> A very good piece that clearly illustrates one of the dangers with LLMs: responsibility for code quality is blindly offloaded to the automated system
It does not illustrate that at all.
> Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards.
> To emphasize, *this is not "vibe coded"*. Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs.
— https://github.com/cloudflare/workers-oauth-provider
The humans who worked on it very, very clearly took responsibility for code quality. That they didn’t get it 100% right does not mean that they “blindly offloaded responsibility”.
Perhaps you can level that accusation at other people doing different things, but Cloudflare explicitly placed the responsibility for this on the humans.
Studies have shown that the more people use automated systems, the more they come to trust them, leading to lapses in oversight. It's called [automation bias](https://en.m.wikipedia.org/wiki/Automation_bias).
If a Cloudflare security engineer can miss the use of deprecated functionality during a public experiment where they know they'll face very intense scrutiny, what will happen down the line, when LLM use is normalized in security contexts?