>Treat prompts as version-controlled assets. Including prompts in commit messages creates valuable context for future maintenance and debugging.
I think this is valuable data, but it is also out-of-distribution data: before AI models were writing code, nothing like it appeared in the training set. Additional training will probably be needed to correlate better results with the new input stream, and also to recognise that some of the records document the model's own past unreliability, so it can develop a healthy scepticism about what it has said before.
There's a lot of talk about model collapse when models train purely on their own output, or about AI slop infecting training data sets, but ultimately it is all data. Combined with a signal indicating which bits turned out to be beneficial, it can all be put to use; even the failures can provide a good counterfactual signal for contrastive learning.
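As a rough illustration (not something from the quoted article), here's a minimal sketch of what that contrastive signal could look like, assuming you can pair an AI-authored change that was later kept with one that was reverted. The scores stand in for a model's judgement of each (prompt, diff) pair, and every name here is made up:

```python
# Hypothetical sketch: use "kept" vs "reverted" AI-written changes as
# positive/negative pairs for a simple margin-based contrastive objective.
import torch
import torch.nn.functional as F

def contrastive_pair_loss(score_kept: torch.Tensor,
                          score_reverted: torch.Tensor,
                          margin: float = 1.0) -> torch.Tensor:
    """Margin ranking loss: the change that survived later commits should
    score higher than the one that was backed out."""
    target = torch.ones_like(score_kept)  # +1 => score_kept should rank higher
    return F.margin_ranking_loss(score_kept, score_reverted, target, margin=margin)

# Toy usage: in practice these scores would come from a model conditioned on
# (prompt, diff) pairs mined from commit history.
score_kept = torch.tensor([2.3, 0.7], requires_grad=True)
score_reverted = torch.tensor([1.1, 0.9], requires_grad=True)
loss = contrastive_pair_loss(score_kept, score_reverted)
loss.backward()
print(float(loss))
```

The point is just that a reverted change isn't wasted: it becomes the negative half of a training pair rather than noise to be filtered out.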