That is the goal. We want to build something that truly understands and can take initiative. But we are designing it with strong guardrails, clear permissions, and full user control from the start. Intelligence without boundaries is neither useful nor safe.
We’re not open source yet, so the code isn’t publicly available. We’re focused on getting the core product stable and secure first, but we’re open to sharing parts of the stack over time.
We use an open-source vision-language model as the base and fine-tune it heavily for agent behavior. The intelligence is in how it reasons, acts, and adapts — not just in the raw model.
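To make that concrete, here's a rough sketch of the kind of perceive-reason-act loop we mean. All the names here (`capture_screen`, `execute`, `propose_action`) are illustrative stand-ins, not our real internals; the point is just where the fine-tuned VLM sits in the loop:

```python
# Hypothetical sketch of a perceive-reason-act agent loop; these names
# are stand-ins, not our actual API.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Action:
    kind: str          # "click", "type", "scroll", or "done"
    target: str = ""   # UI element or coordinates the model grounded to
    text: str = ""     # payload for "type" actions

class AgentModel(Protocol):
    def propose_action(self, goal: str, screenshot: bytes) -> Action: ...

def capture_screen() -> bytes:
    """Placeholder: a real agent would take an actual screenshot here."""
    return b""

def execute(action: Action) -> None:
    """Placeholder: a real agent would drive the mouse/keyboard here."""
    print(f"executing {action.kind} on {action.target!r}")

def run_agent(model: AgentModel, goal: str, max_steps: int = 20) -> None:
    """Capture the screen, let the fine-tuned VLM choose the next step
    toward the goal, execute it, and repeat until the model says done."""
    for _ in range(max_steps):
        screenshot = capture_screen()                    # perceive
        action = model.propose_action(goal, screenshot)  # reason
        if action.kind == "done":
            break
        execute(action)                                  # act
```

The base model supplies the vision-language grounding; the fine-tuning is what makes `propose_action` produce reliable, goal-directed steps rather than free-form text.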
We’ve already optimized it for speed: most actions complete in 1 to 2 seconds in real usage.
We’re still finalizing the privacy policy and will share it publicly soon. For now, no data is stored long-term, and we’re exploring both cloud and local processing options with user control in mind.
That's a very reasonable concern. Trust has to be earned, especially with software that can take actions on your behalf.
We’re starting with clear boundaries — no hidden actions, full visibility into what the agent is doing, and confirmations for anything sensitive. A local-only mode is in development for users who want full control.
This kind of tool only works if people feel safe using it, so building that trust is a core part of what we’re focused on, not just a nice-to-have.
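For the "confirmations for anything sensitive" part, here's a minimal sketch of how such a gate could work. The `SENSITIVE_KINDS` set and the console prompt are assumptions for illustration, not our shipped policy engine:

```python
# Illustrative confirmation gate for sensitive agent actions; the policy
# set and prompt are assumptions for this sketch, not our real design.
from typing import Callable

SENSITIVE_KINDS = {"delete_file", "send_email", "submit_payment"}

def confirm(description: str) -> bool:
    """Ask the user to approve a sensitive action before it runs."""
    answer = input(f"Agent wants to: {description}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def guarded_execute(kind: str, description: str, run: Callable[[], None]) -> bool:
    """Log every action for visibility; block sensitive ones until the
    user explicitly approves."""
    print(f"[agent] {description}")  # nothing hidden: every step is surfaced
    if kind in SENSITIVE_KINDS and not confirm(description):
        print("[agent] cancelled by user")
        return False
    run()
    return True

# Example: a sensitive action only runs after explicit approval.
guarded_execute("send_email", "send the drafted reply", lambda: print("sent"))
```

The key design choice is that every action is logged before it runs, so even non-sensitive steps are visible, and the sensitive ones simply add a blocking approval on top.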
You're right: trust is the hardest part, especially with software that can control your computer. We're focused on giving users transparency, control, and a clear view of what the agent is doing. Local mode and permission controls are on our roadmap. Really appreciate you highlighting this.