Exhibit A is GitHub user joelreymont, who seems to be making a habit of this behavior. He pulled a very similar spam stunt on OCaml: github.com/ocaml/ocaml/pull/14369
Reminds me of Blindsight by Peter Watts. The aliens viewed our radio signals as a kind of malware, designed to consume the recipient's resources for zero payoff and reduced fitness. This is the same.
This is absolutely insane. If you look at joelreymont's recent activity on GitHub, there is what I would call a carpet-bombing of AI-slop PRs: thousands and thousands of changes, AI-generated summaries/notes, copyright issues, you name it.
People like this are ruining open source for the rest of us, and those poor maintainers really have their work cut out for them sorting through this slop.
What are you going to do? You can't expect some sort of self-censorship based on righteousness and morals. I see joelreymont as a pioneer planting Chesterton's fences. LET THE MAN COOK!
IMO there was something of a de facto contract, pre-LLMs, that the set of things one would publicly mirror/excerpt/index and the set of things one would scrape were one and the same.
Back then, legitimate search engines wouldn’t want to scrape things that would just make their search results less relevant with garbage data anyways, so by and large they would honor robots.txt and not overwhelm upstream servers. Bad actors existed, of course, but were very rarely backed by companies valued in the billions of dollars.
People training foundation models now have no such constraints or qualms - they need as many human-written sentences as possible, regardless of the context in which they are extracted. That’s coupled with a broader familiarity with ubiquitous residential proxy providers that can tunnel traffic through consumer connections worldwide. That’s an entirely different social contract, one we are still navigating.
That's an oversimplification; LLM providers are very short-sighted, but not to that extreme. Live websites are needed to produce new data for future training runs.
Edit: damn I've seen this movie before
Not the exact same problem, but a few months ago, I tried to block youtube traffic from my home (I was writing a parental app for my child) by IP. After a few hours of trying to collect IPs, I gave up, realizing that YouTube was dynamically load-balanced across millions of IPs, some of which also served traffic from other Google services I didn't want to block.
I wouldn't be surprised if it was the same with LLMs. Millions of workers allocated dynamically on AWS, with varying IPs.
In my specific case, as I was dealing with browser-initiated traffic, I wrote a Firefox add-on instead. No such shortcut for web servers, though.
I did that, but my router doesn't offer a documented API (or even SSH access) that I could use to reprogram DNS blocks dynamically. I wanted to stop YouTube only during homework hours, so enabling/disabling it a few times per day quickly became tiresome.
Your router almost certainly lets you assign a DNS server instead of using whatever your ISP sends down, so you can point it at an internal device running your own DNS.
Your DNS mostly just passes lookups through, but during homework time, when there's a request for the IP of "www.youtube.com", it returns an IP of your choice instead of the real one. The domain's TTL is only 5 minutes, so the switch takes effect quickly.
Or don't, technical solutions to social problems are of limited value.
I think dnsmasq plus a cron on a server of your choice will do this pretty easily. With an LLM you could set this up in less than 15 minutes if you already have a server somewhere (even one in the home).
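A minimal sketch of that dnsmasq-plus-cron idea, assuming dnsmasq reads drop-in config from /etc/dnsmasq.d and is managed by systemd; the file path, domain list, and schedule are all illustrative, not from the comment:

```python
# Toggle a dnsmasq block for a set of domains, intended to be run from cron.
# CONF_PATH, the domain list, and the systemd restart are assumptions --
# adapt them to your own dnsmasq setup.
import subprocess
import sys

BLOCKED_DOMAINS = ["youtube.com", "www.youtube.com"]
CONF_PATH = "/etc/dnsmasq.d/homework-block.conf"  # hypothetical drop-in path

def block_config(domains):
    """dnsmasq 'address=' lines resolving each domain to 0.0.0.0."""
    return "".join(f"address=/{d}/0.0.0.0\n" for d in domains)

def apply(block: bool) -> None:
    # Write (or empty) the drop-in file, then restart dnsmasq so it
    # re-reads its configuration.
    with open(CONF_PATH, "w") as f:
        f.write(block_config(BLOCKED_DOMAINS) if block else "")
    subprocess.run(["systemctl", "restart", "dnsmasq"], check=True)

if __name__ == "__main__" and len(sys.argv) > 1:
    apply(sys.argv[1] == "on")
```

Crontab entries then flip it on a schedule, e.g. `0 17 * * 1-5 python3 /opt/homework_block.py on` and `0 19 * * 1-5 python3 /opt/homework_block.py off` for weekday homework hours.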
In this case, I don't have a server I can conveniently use as DNS. Plus I wanted to also control the launching of some binaries, so that would considerably complicate the architecture.
Yes, my kid has ADHD. The browser add-on does the job at slowing down the impulse of going to YouTube (and a few online gaming sites) during homework hours.
I've deployed the same one for myself, but set up for Reddit during work hours.
Both of us know how to get around the add-on. It's not particularly hard. But since Firefox is the primary browser for both of us, it does the trick.
They rely on residential proxies powered by botnets — often built by compromising IoT devices (see: https://krebsonsecurity.com/2025/10/aisuru-botnet-shifts-fro... ). In other words, many AI startups — along with the corporations and VC funds backing them — are indirectly financing criminal botnets.
You cannot block LLM crawlers by IP address, because some of them use residential proxies. Source: 1) a friend admins a slightly popular site and has decent bot detection heuristics, 2) just Google “residential proxy LLM”, they are not exactly hiding. Strip-mining original intellectual property for commercial usage is big business.
How does this work? Why would people let randos use their home internet connections? I googled it but the companies selling these services are not exactly forthcoming on how they obtained their "millions of residential IP addresses".
Are these botnets? Are AI companies mass-funding criminal malware companies?
>Are these botnets? Are AI companies mass-funding criminal malware companies?
Without a doubt some of them are botnets. AI companies got their initial foothold by violating copyright en masse with pirated textbook dumps for training data, and whatnot. Why should they suddenly develop scruples now?
It used to be Hola VPN, which let you use someone else's connection while, in the same way, someone else could use yours (that much was communicated transparently). The same Hola client would also route paying business users through you. I'm sure many other free VPN clients do the same thing nowadays.
So the user either has a malware proxy running requests without their knowledge, or voluntarily signed up as a proxy to make extra money off their home connection. Either way, I don't care if their IP gets blocked. The only problem is users behind CGNAT: block one shared IP and legitimate users behind the same address may later be blocked along with it.
Edit: ah yes, another person above mentioned VPNs; that's a good possibility. Another vector is mobile users selling the extra data they don't use to third parties. There are probably many more ways to acquire endpoints.
“Known IP addresses” to me implies an infrequently changing list of large datacenter ranges. Maintaining a dynamic list (along with any metadata required for throttling purposes) of individual IPs is a different undertaking with higher level of effort.
Of course, if you don’t care about affecting genuine users then it is much simpler. One could say it’s collateral damage and show a message suggesting to boycott companies and/or business practices that prompted these measures.
The only real difference is that LLM crawlers tend not to respect /robots.txt, and some of them hammer sites with pretty heavy traffic.
The trap in the article has a link. Bots are instructed not to follow the link. The link is normally invisible to humans. A client that visits the link is probably therefore a poorly behaved bot.
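A minimal sketch of how such a trap can be acted on, assuming a common-log-format access log and a trap path of /trap/ (both the path and the log format are assumptions, not details from the article):

```python
# Collect client IPs that fetched the hidden trap URL from an access log.
# TRAP_PATH and the common-log-format layout are assumptions; a real
# deployment would feed the resulting IPs into a firewall or rate limiter.
import re

TRAP_PATH = "/trap/"  # hypothetical; must match the hidden link's href
# In common log format the first field of each line is the client IP.
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]*\] "(?:GET|HEAD) (\S+)')

def trap_hits(log_lines):
    """Return the set of client IPs that requested the trap path."""
    ips = set()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and m.group(2).startswith(TRAP_PATH):
            ips.add(m.group(1))
    return ips

sample = [
    '203.0.113.7 - - [10/Jan/2025:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 512',
    '198.51.100.9 - - [10/Jan/2025:10:00:01 +0000] "GET /trap/a1b2 HTTP/1.1" 200 128',
]
print(trap_hits(sample))  # only the client that followed the hidden link
```

Since robots.txt disallows the link and it is invisible to humans, anything that shows up in this set is, with high probability, a badly behaved bot.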
Recently there have been more crawlers coming from tens to hundreds of IP netblocks across dozens (or more!) of ASNs, in a highly time- and URL-correlated fashion, with spoofed user-agent(s) and no regard for rate limits, request limits, or robots.txt. These attempt to visit every possible permutation of URLs on the domain and have a lot of bandwidth and established TCP connections available to them. It's not that this didn't happen pre-2023, but it's noticeably more common now. If you have a public webserver you've probably experienced it at least once.
Actual LLM involvement as the requesting user-agent is vanishingly small. It's the same problem as ever: corporations, their profit motive during $hypecycle coupled with access to capital for IT resources, and the protection of the abusers via the company's abstraction away of legal liability for their behavior.
The crawlers themselves are not that different: what differs is their number, how the information is used once scraped (including referencing, or the lack thereof), and whether they obey the rules:
1. Their number: every other company (and the mangy mutt that is its mascot) is scraping for LLMs at the moment, so you get hit by them far more often than by search-engine bots and the like. This also makes them harder to block: even ignoring tricks like using botnets to spread requests over many source addresses (potentially the residential connections of unwitting users infected by malware), the sheer number coming from so many places, with new places appearing all the time, means you cannot maintain a practical blocklist of source addresses. The number of scrapers out there means that small sites can easily be swamped, much like when HN, Slashdot, or a popular subreddit links to a site and it gets "hugged to death" by a sudden glut of individual people who are interested.
2. Use of the information: Search engines actually provide something back: sending people to your site. Useful if that is desirable which in many cases it is. LLMs don't tend to do that though: by the very nature of LLMs very few results from them come with any indication of the source of the data they use for their guesswork. They scrape, they take, they give nothing back. Search engines had a vested interest in your site surviving as they don't want to hand out dead links, those scraping for LLMs have no such requirement because they can still summarise your work from what is effectively cached within their model. This isn't unique to LLMs, go back a few years to the pre-LLM days and you will find several significant legal cases about search engines offering summaries of the information found instead of just sending people to the site where the information is.
3. Ignoring rules: because so many sites are now attempting to block scrapers, usually at a minimum using accepted methods to discourage them (robots.txt, nofollow attributes, etc.), these signals are simply ignored. Sometimes this is malicious, with the people running the scrapers not caring despite knowing the problem they could create; sometimes it is like the spam problem in mail: each scraper thinks it'll be fine because it is only them, with each of the many others thinking the same thing… With players as big as Meta openly defending piracy as just fine for the purposes of LLM training, others see that as a declaration of open season. Those that are malicious, or at least amoral (most of them), don't care. Once they have scraped your data they have, as mentioned above, no vested interest in whether your site lives or dies (either by withering away from lack of attention or falling over under their load, never to be brought back up); in fact they may have an incentive to want your site dead: it would no longer compete with the LLM as a source of information.
None of these alone is the problem, but together they add up to a significant one.
That would require the US government to prioritize the interests of the American people over the interests of a few corporations and the wealthy individuals with a significant financial interest in those corporations.