I've personally found LLMs particularly helpful for getting started on something I'm having trouble with: sure, they'll most likely get it wrong (unless it's something trivial), but they give you enough momentum to keep going even if you end up discarding their original output completely.