I think that sort of ratio is the sweet spot for learning. I've been writing an 8086 simulator in C++, and using an LLM to answer the specific technical questions I come up with has drastically sped up my progress without it actually doing the work for me.
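To give a flavor of the kind of narrow question I mean (this is a hypothetical illustration, not code from my simulator): something like "how do the mod/reg/rm fields of an 8086 ModRM byte break down?", where the LLM explains the layout and I still write the decoder myself. A minimal sketch of what that answer turns into:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical example: splitting an 8086 ModRM byte into its three fields.
// A real simulator would go on to dispatch on these values; this only shows
// the bit layout the LLM helped clarify.
struct ModRM {
    uint8_t mod; // bits 7-6: addressing mode (00/01/10 = memory forms, 11 = register)
    uint8_t reg; // bits 5-3: register operand or opcode extension
    uint8_t rm;  // bits 2-0: register or memory base selector
};

ModRM decode_modrm(uint8_t byte) {
    return ModRM{
        static_cast<uint8_t>((byte >> 6) & 0x3),
        static_cast<uint8_t>((byte >> 3) & 0x7),
        static_cast<uint8_t>(byte & 0x7),
    };
}

int main() {
    ModRM m = decode_modrm(0xD8); // 11 011 000 -> register-to-register form
    std::printf("mod=%u reg=%u rm=%u\n", m.mod, m.reg, m.rm);
    return 0;
}
```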
They can, if you write down your thought process, which is probably what you should do when you are using an LLM to create a product, but what do I know.
You do not have to be as accurate or as specific, and you do not have to worry about how you word or organize things; unlike with a blog post, the LLM can figure it out.
So "To some people the process leading to a finished project is the most interesting thing about posts like these." is bullshit, that is said by someone who has never used LLM properly. You can achieve it with LLMs. You definitely can, I know, I did, accurately (I double checked).
How come? You had different experiences? Which LLMs, what prompts? Give me all the details that support your claim that it is not true. My experiences completely differ from yours, so for the way I use it, it is very much true.
That said, it is probably pointless to argue with full-blown AI skeptics.
Plenty of people have had great, productivity-enhancing experiences with LLMs. You did not; fine. That does not reflect on the tool, it reflects on the way you use it.