"Or, how do you get them to use a recent API that doesn't dominate their training data?"
Paste in the documentation or some examples. I do this all the time - "teaching" an LLM about an API it doesn't know yet is trivially easy if you take advantage of the long context windows models accept these days.
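For example, I'll often just concatenate the relevant docs and a working example into one big prompt before asking for the change. A rough sketch of what I mean (the paths and the wording are made up - point it at whatever actually covers the API you care about):

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Hypothetical paths: substitute whatever docs/examples cover the API in question.
    let sources = ["docs/ui_button.md", "examples/button.rs"];

    let mut prompt = String::from(
        "Here is current documentation and a working example for the API I want you to use.\n\
         Use only these APIs, not whatever you remember from your training data.\n\n",
    );
    for path in sources {
        // Inline each file under a labelled divider so the model can tell them apart.
        prompt.push_str(&format!("--- {path} ---\n{}\n\n", fs::read_to_string(path)?));
    }
    prompt.push_str("Task: <describe the change you want here>.\n");

    // Paste the output into the model's context (or pipe it into whatever tool you use).
    println!("{prompt}");
    Ok(())
}
```

The point is just to get the current API surface in front of the model so it doesn't fall back on whatever older version dominates its training data.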
I've tried this.
I've scraped example pages directly from GitHub and given the models a 200-line file with instructions like "just insert this type of thing", and they will invariably use bad APIs.
I'm at work so I can't try again right now, but the last time I tried I used Claude with context pasted in, ChatGPT 4o just chatting, Copilot in Neovim, and Aider with Claude plus all the files uploaded as context.
It took a long time to get anything that would compile, way longer than just reading the docs and doing it myself, and the result was still wrong anyway. This is a recurring issue with Rust, and I'd love a workaround since I spend 60+ hours/week writing it (though not bevy). Probably a skill issue.
I don't know anything about bevy, but yeah, that looks like it would be a challenge for the models. In this particular case I'd tell the model how I wanted it to work - rather than "Add a button to the left panel that prints 'Hello world' when pressed" I'd say something more like (I'm making up these details): "Use the bevy::Panel class with an inline callback to add a button to the bottom of the left panel".
Or I'd more likely start by asking for options: "What are some options for adding a button to that left panel?" - then pick one that I liked, or prompt it to use an approach it didn't suggest.
After it delivered code, if I didn't like the approach it had taken I'd tell it: "Don't use that class, use X instead" or "define a separate function for that callback" or whatever.
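I don't know bevy, so take this with a grain of salt, but to make the "name the API surface" point concrete: here's a minimal sketch of a left panel plus a button that prints "Hello world" when pressed, assuming the 0.13-era UI API (ButtonBundle, Style, Interaction::Pressed). That's exactly the surface that shifts between releases, so treat it as something to steer the model with, not a known-good answer for whatever version you're on:

```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Startup, setup)
        .add_systems(Update, button_system)
        .run();
}

// Spawn a camera, a left-hand panel, and a button inside it (sizes/colors are arbitrary).
fn setup(mut commands: Commands) {
    commands.spawn(Camera2dBundle::default());
    commands
        .spawn(NodeBundle {
            style: Style {
                width: Val::Px(200.0),
                height: Val::Percent(100.0),
                flex_direction: FlexDirection::Column,
                ..default()
            },
            background_color: Color::rgb(0.15, 0.15, 0.15).into(),
            ..default()
        })
        .with_children(|parent| {
            // Button label omitted for brevity; a TextBundle child would go here.
            parent.spawn(ButtonBundle {
                style: Style {
                    width: Val::Px(150.0),
                    height: Val::Px(50.0),
                    ..default()
                },
                ..default()
            });
        });
}

// Fires whenever any Button's Interaction state changes to Pressed.
fn button_system(query: Query<&Interaction, (Changed<Interaction>, With<Button>)>) {
    for interaction in &query {
        if *interaction == Interaction::Pressed {
            println!("Hello world");
        }
    }
}
```

Being that specific in the prompt ("use ButtonBundle and an Interaction query") leaves the model a lot less room to reach for an API from three versions ago.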