For a REPL-like interface, you could try the chatgpt-shell package. It can execute code generated by the LLM, though it too does this via org-babel: it just calls org-babel functions under the hood. It's also OpenAI-only right now, although the author plans to add support for the other major APIs.
gptel has a buffer-centric design because it tries to get out of your way and integrate with your regular Emacs usage. (For example, it's even available _in_ the minibuffer: you can call it in the middle of another command and fill the minibuffer prompt itself with text from an LLM response.)
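
To make that concrete, here's a minimal sketch of how you might set that up in your own config (this is not something gptel ships; the `C-c RET` binding is an arbitrary choice of mine): with `enable-recursive-minibuffers` set and `gptel-send` bound in the minibuffer keymap, you can type a rough prompt into any command's minibuffer, invoke gptel, and have the response inserted at point.

```emacs-lisp
(require 'gptel)

;; Let commands that themselves read from the minibuffer run while
;; another minibuffer is already active.
(setq enable-recursive-minibuffers t)

;; Example binding (not a gptel default): while any command is reading
;; from the minibuffer, C-c RET sends the text typed so far to the LLM
;; and inserts the response at point.
(define-key minibuffer-local-map (kbd "C-c RET") #'gptel-send)
```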