And the AI is trained to write plausible output and pass test cases.

Have you ever tried to generate test cases that are robust against a malicious actor deliberately trying to pass them? For example, if you are trying to automate homework grading?

The AI writing the tests needs to anticipate the likely bug well enough to know to write a test case for it, but there is an effectively infinite number of subtle bugs for the AI writing the code to choose from.
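
To make that concrete, here's a rough Python sketch (a hypothetical student_sort grading example, not anyone's actual harness): a fixed test is trivial to game by hard-coding the expected output, while randomized checks against a trusted reference raise the bar but still only cover the failure modes the test author thought to sample.

    import random

    # Easy to game: a submission can special-case the one known input
    # and return the memorized answer.
    def gameable_test(student_sort):
        assert student_sort([3, 1, 2]) == [1, 2, 3]

    # Harder to game: compare against a trusted reference on random inputs.
    # Still only catches the bug classes the grader thought to sample
    # (duplicates, negatives, empty lists), not every subtle failure mode.
    def randomized_test(student_sort, trials=1000):
        for _ in range(trials):
            xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
            assert student_sort(list(xs)) == sorted(xs)

Even then, anyone (or any AI) who can see the harness can target whatever the reference check doesn't cover, which is the point above.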





