It seems like you might be trying to use it like a search engine, which is a common mistake people make when first trying LLMs. LLMs are not like Google.
The key is to give it context so it can help you. For example, if you want it to help you with Spark configuration, give it the Spark docs. If you want it to help you write code, give it your codebase.
Tools like Cursor make this process very easy. You can also set up a local MCP server so the LLM can fetch the context and tools it needs on its own.
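For concreteness, a minimal local MCP server can be just a few lines. This is a rough sketch assuming the official MCP Python SDK's FastMCP helper; the tool names and the docs path are purely illustrative:

```python
# Rough sketch of a local MCP server using the MCP Python SDK's FastMCP
# helper. Tool names and the spark-docs path are hypothetical examples.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("spark-helper")

# Hypothetical location of locally downloaded Spark docs.
DOCS_DIR = Path("./spark-docs")


@mcp.tool()
def search_spark_docs(query: str) -> str:
    """Return doc snippets containing the query string (naive text search)."""
    hits = []
    for page in DOCS_DIR.glob("**/*.md"):
        text = page.read_text(errors="ignore")
        if query.lower() in text.lower():
            hits.append(f"{page.name}:\n{text[:500]}")
    return "\n\n".join(hits) or "No matches found."


@mcp.tool()
def read_file(path: str) -> str:
    """Return the contents of a file from the codebase."""
    return Path(path).read_text(errors="ignore")


if __name__ == "__main__":
    # Runs over stdio so a client (e.g. Claude Desktop or an editor
    # integration) can launch the server and call its tools on demand.
    mcp.run()
```

Point your client's MCP config at this script and the model can pull in docs or files itself instead of you pasting them into every prompt.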
Thank you very much for the ideas here, I will try the approach of giving it context. I haven't gotten into Cursor, since I use Helix and IntelliJ… I need to look into the MCP server thing.
Thank you, I will try this too. I feel like I didn't have to do this much work with other models like o1/o3/4o... but if it delivers the returns I'm hearing about from the hype around Claude, I am willing to try.