Hacker News | rschwabco's comments

A version supporting Pinecone directly is coming soon!


These benchmarks seem off the charts. How is this accomplished?


They are off the charts. We're really proud of our baby.

The short answer is that we built a database from the ground up (only one external dependency) in Rust to leverage a universalized data architecture model we created. That's all we feel comfortable saying at the moment.

But if you're interested we can give you an API key to test it yourself (on any dataset you'd like) early next week. We're also going to drop a video on query features later this week, which I'll post here as well.

We also applied to YC, so any love here would be appreciated.


That's cool! What's neat about Topaz is that it combines the Zanzibar approach and the policy-as-code approach. That allows you to use the ReBAC, ABAC and RBAC authz patterns interchangeably.


What can I do with policies I create in this playground?


You can use the OPA CLI [0] or the policy CLI [1] to build and run the policies.

[0] https://www.openpolicyagent.org/docs/edge/cli/#opa-build

[1] https://github.com/opcr-io/policy


Can't I just use Auth0 for authorization?


Auth0 is a great developer API for authentication, and Aserto picks up where Auth0 leaves off. The "contract" between the authentication system (Auth0) and the authorization system (Aserto) is a signed JWT.

You can get away with very simple access control using scopes embedded in a JWT, but that approach runs out of room pretty quickly [0].

With Aserto, you can write authorization rules that are evaluated for every application request, and reason about the user attributes, the operation, and any resource context that is involved in the authorization decision.

[0] https://www.aserto.com/blog/oauth2-scopes-are-not-permission...
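
To make the difference concrete, here's a minimal Python sketch (this isn't Aserto's API, and the claim and field names are made up) contrasting a coarse scope check baked into the JWT at login time with a per-request check that also considers the resource:

    import jwt  # PyJWT

    def can_edit_with_scopes(token: str, public_key: str) -> bool:
        # Scope-based check: the decision was baked into the token when it was
        # issued, so it can't take the specific resource into account.
        claims = jwt.decode(token, public_key, algorithms=["RS256"], audience="my-api")
        return "documents:edit" in claims.get("scope", "").split()

    def can_edit_with_policy(claims: dict, operation: str, resource: dict) -> bool:
        # Policy-style check: evaluated on every request against user attributes,
        # the operation, and resource context (hypothetical fields for illustration).
        return (
            operation == "edit"
            and resource["project_id"] in claims.get("project_ids", [])
            and claims.get("department") == resource.get("owning_department")
        )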


As an ex-Auth0 employee, I had been watching Aserto for a while - it is indeed elegantly designed to pick up naturally where Auth0 leaves off. I wish this had been available when we were adding authorization to the Fusebit APIs. But well, next startup...


Thanks for the kind words! Maybe next time you look at evolving your authorization model, we can chat :)


Auth0 is building something pretty cool akin to Zanzibar:

https://play.fga.dev/


There are at least two startups that are re-implementing something like Zanzibar. In my experience, RBAC is too simplistic even for apps that aren't particularly advanced. For example, with RBAC alone you can't express «User X can edit object Y» if object Y is one of many objects in a collection. Zanzibar – and any system built on top of (subject, relation, object) tuples – can express that quite intuitively.
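
For anyone who hasn't read the paper, the core of the tuple model fits in a few lines of Python. This is just a toy sketch, ignoring usersets, consistency ("zookies"), and everything else that makes the real system hard; the names are illustrative:

    # Store (subject, relation, object) facts and answer
    # "can user X edit object Y?" by looking for a matching tuple.
    tuples = {
        ("user:alice", "editor", "doc:readme"),
        ("user:bob", "viewer", "doc:readme"),
    }

    def check(subject: str, relation: str, obj: str) -> bool:
        return (subject, relation, obj) in tuples

    assert check("user:alice", "editor", "doc:readme")    # alice can edit this doc
    assert not check("user:bob", "editor", "doc:readme")  # bob only has viewer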


RBAC is simple to get started with, but indeed pretty limited. We tend to use the term because it's more recognizable than ABAC or ReBAC.

The {subject,relation,object} tuples do provide a convenient way to express an ACL-based system.

Most real-world systems we've encountered tend to have a combination of user-centric and resource-centric aspects to them. With an ABAC-style policy, you can easily enforce relationships like "user X can edit objects in project Y, and can read objects in project Z". In fact, the Aserto policy for Aserto [1] uses this style of authorization, without going "full-tuple".
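
A rough Python sketch of that kind of rule (illustrative only, not the actual policy linked in [1]; the field names are invented):

    # Attribute-based rule: "edit" on anything in a project the user maintains,
    # "read" on anything in a project the user observes - no per-object ACLs.
    def allowed(user: dict, action: str, resource: dict) -> bool:
        project = resource["project_id"]
        if action == "edit":
            return project in user.get("maintains", [])
        if action == "read":
            return project in user.get("maintains", []) or project in user.get("observes", [])
        return False

    user = {"maintains": ["project-y"], "observes": ["project-z"]}
    assert allowed(user, "edit", {"project_id": "project-y"})
    assert allowed(user, "read", {"project_id": "project-z"})
    assert not allowed(user, "edit", {"project_id": "project-z"})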

For many use cases, the prospect of creating an ACL for every resource feels like a management nightmare to the folks we've talked to; they typically have a "resource group" construct or hierarchy that they want to treat uniformly from an authorization perspective.

Finally, in addition to the user model, Aserto has a resource model, and we're exploring evolving it more towards the tuple approach.

[1] https://github.com/aserto-dev/policy-aona


Zanzibar has a notion of «usersets» for expressing conditions such as «File X can be viewed by (all users who can view Folder Y)», which avoids a lot of duplication in the rule sets. One challenge they had was making the resolution of these rules fast enough, since the rules effectively form a graph and every check becomes a graph traversal.
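
Naively, resolving a userset looks something like the recursion below (a toy Python sketch with made-up identifiers; the real system adds indexing, caching, and consistency guarantees to make this fast):

    # Maps (object, relation) -> direct subjects, where a subject is either a
    # user id or a reference to another userset.
    rules = {
        ("file:x", "viewer"): {("userset", "folder:y", "viewer")},
        ("folder:y", "viewer"): {("user", "alice"), ("user", "bob")},
    }

    def can_view(user: str, obj: str, relation: str = "viewer", seen=None) -> bool:
        seen = seen or set()
        if (obj, relation) in seen:      # guard against cycles in the rule graph
            return False
        seen.add((obj, relation))
        for subject in rules.get((obj, relation), set()):
            if subject == ("user", user):
                return True
            if subject[0] == "userset" and can_view(user, subject[1], subject[2], seen):
                return True
        return False

    assert can_view("alice", "file:x")   # alice can view file X via folder Y's viewers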


Reading the Zanzibar paper is actually how I learned how powerful an optimization technique denormalization can be.


It's definitely a good paper :)

Denormalization has been around since Codd and Date gave us relational databases and the normal forms, and then we all realized that most applications have to precompute some joins in order to perform well. In SQL Server we used to call them "materialized views".


Yes, flattening the graph is essential to getting reasonable performance.

A separate (difficult) problem is to keep all the tuple data consistent with the data in your store (often these contain duplicate info).
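
As a rough Python illustration of the flattening idea (made-up data, and deliberately ignoring the consistency problem): precompute the transitive membership once, then each check is a set lookup instead of a traversal.

    group_edges = {                # group -> direct members (users or other groups)
        "group:eng": {"group:backend", "user:carol"},
        "group:backend": {"user:alice", "user:bob"},
    }

    def flatten(edges: dict) -> dict:
        # Expand nested groups into a flat user set per group.
        flat = {}
        def members(group, stack):
            if group in flat:
                return flat[group]
            result = set()
            for m in edges.get(group, set()):
                if m.startswith("group:"):
                    if m not in stack:          # cycle guard
                        result |= members(m, stack | {m})
                else:
                    result.add(m)
            flat[group] = result
            return result
        for g in edges:
            members(g, {g})
        return flat

    flat = flatten(group_edges)
    assert "user:alice" in flat["group:eng"]    # one lookup, no traversal at check time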


Looks like I need an account to use the registry. What information do you collect?


OPCR uses Dex [1] for its IDP, and federates with GitHub through an OAuth2 flow. We only ask for public GitHub information.

[1] https://github.com/dexidp/dex

