It's more about how you use them. Asking a general-purpose model like ChatGPT precise k8s questions might prove counterproductive; however, feeding the entire k8s documentation into a long-context LLM like Gemini and asking questions that way is invaluable. And not just the documentation, but your entire cluster config too.

Like you said, if you're "blindly trusting LLMs, you'll find yourself in trouble". That's true, but the same can be said for StackOverflow or any other resource. Sifting through StackOverflow for the exact answer to your question (then understanding that answer, and hoping it applies to your environment, your version, and so on) is much less efficient than ingesting the full docs, your config, your environment, and your question, and having the model spit out exactly what you need in whatever format you need.

You can even have it derive multiple web-search queries from your main question, gather several sources, and aggregate and reference them in the answer, so you can easily cross-check for hallucinations. With those inline sources, an LLM answer is arguably easier to fact-check than a StackOverflow thread.
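To make the "ingest everything, then ask" workflow concrete, here's a minimal sketch in Python. It assumes you've downloaded the k8s docs as Markdown files into a local directory, and that `kubectl` is configured for your cluster; the commented-out Gemini call at the end uses Google's `google-generativeai` package, whose exact API may differ by version.

```python
import pathlib
import subprocess

def build_prompt(docs_dir: str, question: str) -> str:
    """Bundle docs + live cluster state + the question into one prompt."""
    # Concatenate every Markdown file from the downloaded documentation.
    docs = "\n\n".join(
        p.read_text(encoding="utf-8", errors="ignore")
        for p in sorted(pathlib.Path(docs_dir).glob("**/*.md"))
    )
    # Dump the live cluster state so answers match *your* environment,
    # not some generic or out-of-date setup.
    try:
        config = subprocess.run(
            ["kubectl", "get", "all,configmaps", "-A", "-o", "yaml"],
            capture_output=True, text=True,
        ).stdout
    except FileNotFoundError:
        config = "(kubectl not available)"
    return (
        "You are a Kubernetes assistant. Answer using ONLY the material below.\n\n"
        f"=== DOCUMENTATION ===\n{docs}\n\n"
        f"=== CLUSTER CONFIG ===\n{config}\n\n"
        f"=== QUESTION ===\n{question}\n"
    )

# With the prompt built, send it to a long-context model, e.g. (hypothetical):
#   import google.generativeai as genai
#   genai.configure(api_key="...")
#   answer = genai.GenerativeModel("gemini-1.5-pro").generate_content(
#       build_prompt("./k8s-docs", "Why is my deployment stuck in CrashLoopBackOff?"))
```

The point isn't the specific client library; it's that the model sees your docs, your config, and your question in one context window instead of you triangulating across search results.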