> We recommend that you put this on a local fork, as we really want the service to be as lightweight and simple as possible, and we see this as a good entry point for new developers.
Sadly, it seems like you're recommending forking the library instead of allowing people to use local LLMs. You were smart enough to lock the PR from any further conversation at least :)
You can override the default OpenAI API URL using an environment variable (OPENAI_API_BASE in older versions of the client, OPENAI_BASE_URL in v1+). Any LLM provider / inference server offering an OpenAI-compatible API will work.
Granted, that only works if they use the `openai` Python library (or another library/implementation that reads the same env var), hence my question two comments up...
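
For anyone landing here from search: a minimal sketch of what this looks like in practice, assuming the service uses the `openai` Python package (v1+) and that you have an OpenAI-compatible server (e.g. llama.cpp, Ollama, vLLM) running locally. The URL, model name, and port below are placeholders, not anything this project ships:

```python
# Point the `openai` client at a local OpenAI-compatible server.
# All values here are assumptions -- adjust for your own setup.
from openai import OpenAI

# Option 1: pass the base URL explicitly (openai >= 1.0)
client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="not-needed-locally",         # many local servers ignore the key
)

# Option 2: environment variables, no code changes required:
#   export OPENAI_BASE_URL="http://localhost:8000/v1"   # openai >= 1.0
#   export OPENAI_API_BASE="http://localhost:8000/v1"   # openai < 1.0

response = client.chat.completions.create(
    model="local-model",  # whichever model your server exposes
    messages=[{"role": "user", "content": "Hello from a local LLM!"}],
)
print(response.choices[0].message.content)
```

The env-var route is the relevant one here: it lets you redirect the service to a local model without touching its code, which is exactly why forking shouldn't be necessary.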