
One of the exciting things to me about AI agents is how they push you to build (and make it practical to build) processes we've always known were important but were frequently deprioritized in the face of shipping the system.

You can use how uncomfortable you are with the AI doing something as a signal that you need to invest in systematic verification of that something. For instance, in the linked case, the team could build a system for verifying and validating their data migrations. That would move a whole class of changes into the AI realm.

This is usually much easier to quantify and explain externally than nebulous talk about the system's tech debt.
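
A rough sketch of what such a migration check could look like, assuming (hypothetically) a Python codebase with a SQLite source and target; the file and table names are invented for illustration:

    # Post-migration check (all names hypothetical): compare row counts and an
    # order-independent content hash between the old table and the migrated one.
    import hashlib
    import sqlite3

    def table_fingerprint(conn, table):
        rows = conn.execute(f"SELECT * FROM {table}").fetchall()
        digest = hashlib.sha256()
        for row in sorted(map(repr, rows)):
            digest.update(row.encode())
        return len(rows), digest.hexdigest()

    def verify_migration(old_db, new_db, table):
        with sqlite3.connect(old_db) as old, sqlite3.connect(new_db) as new:
            old_fp = table_fingerprint(old, table)
            new_fp = table_fingerprint(new, table)
        assert old_fp == new_fp, f"migration mismatch for {table}: {old_fp} != {new_fp}"

    if __name__ == "__main__":
        # Hypothetical usage: pre- and post-migration snapshots of a users table.
        verify_migration("before.db", "after.db", "users")

Once a check like that runs automatically, letting the agent write the migration is no longer a leap of faith.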



For sure. Another interesting trick I found to be surprisingly effective is to ask Claude Code to "Look around the codebase, and if something is confusing or weird/counterintuitive, drop an AIDEV-QUESTION: … comment so I can document that bit of code and/or improve it". We found some really gnarly things that had been forgotten in the codebase.
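
To make the pattern concrete, a hypothetical example of the kind of comment this leaves behind (function and field names are made up for illustration):

    def reconcile_orders(orders, ledger):
        # AIDEV-QUESTION: why is the ledger re-sorted on every call when callers
        # already appear to pass it sorted? Is this guarding against a known
        # upstream bug, or is it leftover from an older code path?
        ledger = sorted(ledger, key=lambda entry: entry["timestamp"])
        posted = {entry["order_id"] for entry in ledger}
        return [order for order in orders if order["id"] in posted]

Each marker then becomes a small, triageable documentation or refactoring task.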


But why stick with the AIDEV- style? I mean, this is what comments are for: explaining the why that can’t be read from code. You could just make it a regular comment, right?


yup, but:

1. AIDEV-* is easier to visually distinguish, and

2. it is grep-able by the models, so they can "look around" the codebase in one glance (rough sketch below)
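
A minimal sketch of that one-glance scan, roughly what `grep -rn "AIDEV-" .` gives you, written here in Python and assuming a tree of .py files:

    from pathlib import Path

    # Collect every AIDEV-* marker in the tree in a single pass.
    def find_aidev_markers(root="."):
        hits = []
        for path in Path(root).rglob("*.py"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if "AIDEV-" in line:
                    hits.append((str(path), lineno, line.strip()))
        return hits

    for path, lineno, line in find_aidev_markers():
        print(f"{path}:{lineno}: {line}")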


Agreed, my hunch is that you might reach for higher-level validation tools like acceptance and property tests, or even formal verification, as the relative cost of that boilerplate decreases.
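
As a concrete example of the property-test end of that spectrum, a minimal sketch using the Hypothesis library; the function under test is invented for illustration:

    from hypothesis import given, strategies as st

    def normalize_whitespace(text: str) -> str:
        # Collapse runs of whitespace into single spaces and trim the ends.
        return " ".join(text.split())

    # Properties that should hold for every input, not just hand-picked cases.
    @given(st.text())
    def test_normalize_is_idempotent(text):
        once = normalize_whitespace(text)
        assert normalize_whitespace(once) == once

    @given(st.text())
    def test_no_leading_or_trailing_whitespace(text):
        result = normalize_whitespace(text)
        assert result == result.strip()

The per-property boilerplate is small, which is exactly the kind of thing an agent is happy to generate and keep up to date.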



