[flagged] Open-source framework for real-time AI voice (github.com/videosdk-live)
27 points by sagarkava 84 days ago | 15 comments


Hey, I'm Sagar, co-founder of VideoSDK.

I'm beyond excited to share what we've been building: VideoSDK Real-Time AI Agents. Today, voice is becoming the new UI.

We expect agents to feel human: to understand us, respond instantly, and work seamlessly across web, mobile, and even telephony. But to achieve this, developers have to stitch together STT, LLM, and TTS, glued with HTTP endpoints and a prayer.

Too often this results in agents that sound robotic, hallucinate, and fail in production environments without observability. So we built something to solve that.

Now, we are open sourcing it!

Here’s what it offers:

- Global WebRTC infra with <80ms latency
- Native turn detection, VAD, and noise suppression
- Modular pipelines for STT, LLM, TTS, avatars, and real-time model switching
- Built-in RAG + memory for grounding and hallucination resistance
- SDKs for web, mobile, Unity, IoT, and telephony, with no glue code needed
- Agent Cloud to scale infinitely with one-click deployments, or self-host with full control

Think of it like moving from a walkie-talkie to a modern network tower that handles thousands of calls.
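To make the modular-pipeline idea concrete, here's a minimal sketch in Python. Every class name is a hypothetical stand-in, not the actual VideoSDK Agents API; see the repo for the real interfaces.

    # Minimal sketch of the modular STT -> LLM -> TTS pipeline idea.
    # All names here are hypothetical stand-ins, NOT the actual VideoSDK API.
    from typing import Protocol

    class STT(Protocol):
        def transcribe(self, pcm: bytes) -> str: ...

    class LLM(Protocol):
        def complete(self, prompt: str) -> str: ...

    class TTS(Protocol):
        def synthesize(self, text: str) -> bytes: ...

    class Pipeline:
        """Chains pluggable STT, LLM, and TTS stages behind one interface,
        so any stage can be swapped without touching the others."""
        def __init__(self, stt: STT, llm: LLM, tts: TTS):
            self.stt, self.llm, self.tts = stt, llm, tts

        def handle_turn(self, pcm: bytes) -> bytes:
            text = self.stt.transcribe(pcm)    # speech -> text
            reply = self.llm.complete(text)    # text -> reply
            return self.tts.synthesize(reply)  # reply -> audio

    # Swapping a provider is then a constructor change, e.g.
    # Pipeline(stt=DeepgramSTT(), llm=OpenAILLM(), tts=ElevenLabsTTS())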

VideoSDK gives you the infrastructure to build voice agents that actually work in the real world, at scale.

I'd love your thoughts and questions! Happy to dive deep into architecture, use cases, or crazy edge cases you've been struggling with.


Good! Is there a way to prompt the TTS output tone, like ElevenLabs? https://elevenlabs.io/docs/best-practices/prompting/eleven-v...

We're building AI companions, so tone prompting would be great.


Hey bigcat12345678, great question!

Yes, with VideoSDK's Real-Time AI Agents, you can control the TTS output tone, either via prompt engineering (if your TTS provider supports it, like ElevenLabs) or by integrating custom models that support tonal control directly. Our modular pipeline architecture makes it easy to plug in providers like ElevenLabs and pass tone/style prompts dynamically per utterance.

We actually support ElevenLabs out of the box. You can check out the integration details here: https://docs.videosdk.live/ai_agents/plugins/tts/eleven-labs

So if you're building AI companions and want them to sound calm, excited, empathetic, etc., you can absolutely prompt for those tones in real time, or even switch voices or tones mid-conversation based on context or user emotion.
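For reference, per-utterance tone control against the ElevenLabs REST API looks roughly like this. The endpoint and voice_settings fields follow the ElevenLabs docs at the time of writing; API_KEY, VOICE_ID, and the specific setting values are placeholders.

    # Per-utterance tone control via the ElevenLabs text-to-speech endpoint.
    # Field names per the ElevenLabs REST docs at the time of writing;
    # API_KEY and VOICE_ID are placeholders.
    import requests

    API_KEY = "YOUR_ELEVENLABS_KEY"
    VOICE_ID = "YOUR_VOICE_ID"

    def speak(text: str, style: float = 0.0) -> bytes:
        """Synthesize one utterance; raise `style` for a more expressive read."""
        resp = requests.post(
            f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
            headers={"xi-api-key": API_KEY},
            json={
                "text": text,
                "model_id": "eleven_multilingual_v2",
                "voice_settings": {
                    "stability": 0.4,         # lower = more varied delivery
                    "similarity_boost": 0.8,  # adherence to the base voice
                    "style": style,           # 0.0 neutral .. 1.0 exaggerated
                },
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.content  # audio bytes (MP3 by default)

    # A companion agent can then vary tone per reply:
    calm = speak("Take a deep breath. I'm right here.", style=0.1)
    excited = speak("That's amazing news, congratulations!", style=0.8)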

Let us know what you're building. Happy to dive deeper into tone control setups or help debug a specific flow!


Got to the HN front page and ignores the comments on the post...


and made three accounts to add more praise lol. This should be removed.


Is this running in production at any site/company?


Yes, VideoSDK Real-Time AI Agents are already running in production with several partners across different domains — from healthcare assistants to customer support agents and AI companions. These deployments are handling real user interactions at scale, across web, mobile, and even telephony.

If you're curious about specific use cases or want to explore how it can fit into your product, happy to share more details or walk through an example.


Do you watermark the output to enable fraud detection?


Why would I use this vs @openai/openai-agents-python (or openai-agents-ts) - the new realtime agents SDKs?

There are so many AI frameworks out there that live & die so quickly that I am generally hard pressed to use any of these unless there is some killer feature I absolutely need.


Totally fair. The space moves fast, and it's smart to be skeptical. Here's how VideoSDK Real-Time AI Agents stand out from OpenAI agents SDKs and others:

1. Voice infra included. OpenAI agents handle logic and memory, but they don't include real-time audio infra.

VideoSDK gives you:

- <80ms global WebRTC latency

- Built-in turn-taking, VAD, and noise suppression

- Real-time voice across web, mobile, IoT, and telephony

2. Fully modular pipeline. No vendor lock-in. Swap STT, LLM, TTS, and avatars. Change models live per user or use case. Want ElevenLabs for tone and OpenAI for reasoning? Easy.

3. Native RAG + memory. Integrated long-term memory and retrieval help reduce hallucinations and keep conversations grounded (see the sketch after this list).

4. Scale-ready. Deploy globally with one click using Agent Cloud, or self-host with full control. Built for production use.
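To illustrate point 3, a minimal retrieval-then-generate loop looks like this. It's a sketch only: embed, search, and llm_complete are hypothetical stand-ins for whatever embedding store and LLM the pipeline is configured with.

    # Minimal sketch of retrieval-grounded generation (point 3 above).
    # `embed`, `search`, and `llm_complete` are hypothetical stand-ins.
    def grounded_reply(question: str, embed, search, llm_complete) -> str:
        # 1. Retrieve passages semantically close to the user's question.
        docs = search(embed(question), top_k=3)

        # 2. Pin the model to the retrieved context to curb hallucination.
        context = "\n".join(d.text for d in docs)
        prompt = (
            "Answer using ONLY the context below. "
            "If the answer isn't there, say you don't know.\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return llm_complete(prompt)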

If you're building real-time, voice-first agents that need to work across platforms and scale reliably, this is purpose-built for that.

Happy to dive into your use case if you're exploring options.


We're not a model ourselves—we provide the infrastructure that enables you to deploy and use any model of your choice, while simplifying communication through AI agents.


No demo? No demo video? Nothing?


Hey! Quick video overview: https://www.youtube.com/watch?v=m_oc1GDyhrc

Live demo to try it out: https://aiagent.tryvideosdk.live


How does it compare to Chatterbox TTS? https://github.com/resemble-ai/chatterbox/


Chatterbox is great for local/private TTS from Resemble AI.

Our voice agent SDK is broader: it's full real-time voice infra with STT, LLM, TTS, memory, and RAG built in. You can plug in Resemble, ElevenLabs, etc., and deploy across web, mobile, and telephony with <80ms latency.
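For anyone who wants that combination, wrapping local Chatterbox behind a pluggable synthesize() interface is a few lines. The ChatterboxTTS calls below follow the usage shown in the resemble-ai/chatterbox README at the time of writing; treat them as approximate and check the repo.

    # Wrapping local Chatterbox TTS as a swappable pipeline stage.
    # ChatterboxTTS usage per the resemble-ai/chatterbox README at the
    # time of writing; the API may have changed since.
    import torch
    import torchaudio
    from chatterbox.tts import ChatterboxTTS

    class LocalChatterboxTTS:
        def __init__(self):
            device = "cuda" if torch.cuda.is_available() else "cpu"
            self.model = ChatterboxTTS.from_pretrained(device=device)

        def synthesize(self, text: str, out_path: str = "reply.wav") -> str:
            wav = self.model.generate(text)                # audio tensor
            torchaudio.save(out_path, wav, self.model.sr)  # model.sr = sample rate
            return out_path

    # Drop-in for the TTS stage of any pipeline that calls synthesize(text).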



