I see that the sampling API is OpenAI-compatible (nice!). Wondering if we can add a native integration for this to LiteLLM with a provider-specific route - `goodfire/`. That would let people test this in projects like aider and dspy.
```python
from litellm import completion
import os

os.environ["GOODFIRE_API_KEY"] = "your-api-key"

# Proposed route: prefix the model name with `goodfire/`
response = completion(
    model="goodfire/meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
```
Saw that you just support Claude for now. If your backend code is in Python, I'd be happy to add support for other models (OpenAI/Gemini/etc.) via a consistent API interface like LiteLLM - https://docs.litellm.ai/docs/#litellm-python-sdk
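For context, here's a minimal sketch of what that consistent interface looks like with the LiteLLM SDK. The model strings are just example routes from the LiteLLM docs; swap in whichever providers you care about:

```python
from litellm import completion
import os

# Keys for whichever providers you want to exercise.
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["GEMINI_API_KEY"] = "your-gemini-key"

messages = [{"role": "user", "content": "Hello, how are you?"}]

# Same call shape for every provider; only the model string changes.
openai_response = completion(model="gpt-4o", messages=messages)
gemini_response = completion(model="gemini/gemini-1.5-pro", messages=messages)

print(openai_response.choices[0].message.content)
```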
Interesting — does your backend server use Python? I couldn't find much about it on your site.
It would be great to see this tested with more commercial LLMs (O1 / Amazon Nova / Llama 3.2 / etc.). If you're open to it, I'd be happy to contribute support for these models via LiteLLM - https://docs.litellm.ai/docs/providers
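As a rough sketch, the LiteLLM routes for those models would look something like this. The exact model IDs are illustrative and can vary by account/region (see the providers page linked above):

```python
from litellm import completion

# Assumes provider credentials are already set in the environment
# (OPENAI_API_KEY, AWS credentials for Bedrock, etc.).
messages = [{"role": "user", "content": "Hello, how are you?"}]

# Illustrative routes; exact model IDs may differ by account/region.
o1_response = completion(model="o1-preview", messages=messages)
nova_response = completion(model="bedrock/amazon.nova-pro-v1:0", messages=messages)
llama_response = completion(model="bedrock/meta.llama3-2-90b-instruct-v1:0", messages=messages)
```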