This is really cool considering how expensive DataDog can get. I'm the author of LogLayer (https://loglayer.dev), which is a structured logger for TypeScript that allows you to use multiple loggers together. I've written transports that allow shipping to other loggers like pino and to cloud providers such as DataDog.
I spent some time writing an integration for HyperDX after seeing this post and hope you can help me roll it out! Would love to add a new "integrations" section to my page that links to the docs on how to use HyperDX with LogLayer.
Definitely possible. Direct HTTP send is a common enough pattern that I'm working on a generic HTTP transport that can be used for these use cases (vs making a unique implementation for each one).
I think FusionAuth does something similar. They have a global user and use the notion of tenants / application registrations (which I think are comparable to a Tesseral Organization) to segment the same user.
You can then define applications (which are mapped 1:1 to tenants) where a user has a registration entry against that application; the user can be referenced by either their global user id or an application-specific user id.
Applications are OAuth2 applications (meaning each has a dedicated client id / secret), so we only create a single application and tenant, and maintain organization segmentation on our own application / db side instead.
(We're paying customers of FusionAuth. Anyone from FusionAuth, feel free to correct me.)
I totally recommend the Basement Brothers YouTube channel which has a large set of reviews with summarized playthroughs and historical background for PC-88 and 98 games:
- Automated refreshing of JWTs on the client side? I always end up having to implement my own logic around this. The big problem is that if you have multiple API calls going out and they all require JWT auth, you need to check the JWT's validity and block the calls until it is refreshed. In next-auth this is impossible to do on the server side, since that side is generally stateless, so you end up with multiple refresh calls happening for the same token.
- The ability to have multiple auth sessions at once, like in a SaaS app where you might belong to multiple accounts / organizations (your intro paragraph sounds like it does)
- Handling how multiple auth sessions are managed if the user opens multiple tabs and swaps accounts in another tab
- Account switching using a Google provider? This seems to be a hard ask for providers like FusionAuth and Cognito. You can't use the Google connector directly; instead you use a generic OAuth2 connector where you can specify custom parameters (e.g. `prompt=select_account`) when starting the initial OAuth2 flow with Google. The use-case: when a user clicks the Google sign-in button, it should go to the Google account switcher / selector instead of immediately signing in the user if they have an existing signed-in Google session.
- Not right now, but there’s already an open issue and a PR in progress.
- We don’t use JWTs directly, and sessions always require state (they’re not stateless). And yeah, both the client and the server handle automatic session refresh.
As another commenter asked: why no JWT? It makes interfacing with our API servers so much easier, as we don't need to maintain infra for sessions and aren't limited by the ~4 KB size limit on cookies.
We don't need it, since everything is a single "server" and cookies are good enough.
JWT would add complexity (e.g. sign-out), so I find it better not to make it the default.
No hooks on the FE side. We use a global lock via a promise. Our API clients are not tied to React in any way.
For every API call, if the lock is not set, we check whether the JWT is still valid. If it isn't, the lock is set by assigning a new promise to it, with the resolve function saved in an external variable to be called after the refresh is done (resolving the held promise unblocks the other calls, allowing the latest token to be used).
All calls await the lock; each either waits for the refresh to complete or just moves on and validates against the currently set token.
Looks like this:
- Await the lock; if it has already been resolved, just continue on
- Check JWT validity via the `exp` claim (the API server itself is responsible for checking the signature and other validity factors); if not valid, set the lock to a new promise and hold onto its resolver, perform the refresh, then release the lock by resolving the promise.
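The steps above can be sketched in TypeScript. A minimal sketch: the function names and the fake `refreshToken` below are placeholders, not any particular library's API.

```typescript
// Module-level state shared by all API calls.
let token: string | null = null;
let lock: Promise<void> | null = null;

// Check only the `exp` claim; the API server validates the signature.
// Real JWTs are base64url-encoded; atob is used here for simplicity.
function isExpired(jwt: string): boolean {
  const payload = JSON.parse(atob(jwt.split(".")[1]));
  return payload.exp * 1000 <= Date.now();
}

// Placeholder: call your auth endpoint here. This fakes a JWT that
// expires in an hour so the sketch is runnable.
async function refreshToken(): Promise<string> {
  const payload = btoa(JSON.stringify({ exp: Math.floor(Date.now() / 1000) + 3600 }));
  return `header.${payload}.signature`;
}

async function getValidToken(): Promise<string> {
  if (lock) await lock;                          // wait for an in-flight refresh
  if (token && !isExpired(token)) return token;  // still valid, just continue

  // Expired: take the lock and hold onto the resolver.
  let release!: () => void;
  lock = new Promise<void>((resolve) => (release = resolve));
  try {
    const fresh = await refreshToken();
    token = fresh;
    return fresh;
  } finally {
    release();                                    // unblock all waiting calls
    lock = null;
  }
}
```

Because the lock is assigned synchronously before the first `await` of the refresh, concurrent calls in the same event loop turn can never both start a refresh: the second caller sees the lock, awaits it, then re-checks the now-fresh token.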
their own privacy team told me they are bound by regulatory obligations to retain data even after you request deletion. I've notified our attorney general's office to see if anything can be done, but it might be too late. I'd love for someone who knows these "regulatory obligations" to chime in.
>their own privacy team told me they are bound by regulatory obligations to retain data even after you request deletion
They have to retain data about the person who requested the deletion which seems eminently reasonable. In the future if you sue them because you can't access your account that you paid for, they have a record that you requested said account's deletion.
Similarly they obviously can't withdraw your data from the anonymized research projects they pursued.
I've had situations where Cursor just starts doing some really bizarre things after long cycles of running tasks unsuccessfully, like the death loop I've seen described in other threads.
The best way I've found to deal with this is to clear the embedding index from the Cursor settings and rebuild it.
I've never had it get to the point where it wants to `rm -rf` my home directory, but now I'm a bit fearful that one day it will, as I currently have it on auto-run.
That's one of the bonuses I was thinking about. It's nice if you have a subset of deps you want to share, or if one dep is actually part of the monorepo, but it does add more to learn.
Thanks. Why are the notions of run and tool separate? Coming from JS, we have the package.json#scripts field and everything executes via a `pnpm run <script name>` command.
sync is something you'll rarely use directly; it's most useful for scripting.
uv run is the bread and butter of uv: it will run any command you need in the project, and ensures it will work by syncing all deps and making sure your command can import stuff and call Python.
In fact, if you run a Python script, you should do `uv run python the_script.py`.
It's so common that `uv run the_script.py` works as a shortcut.
I will write a series of articles on uv on bitecode.dev, and I'll write them so that they work for non-Python devs as well.
Sorry, I misread and was stuck on sync. Groups and extras are for lib makers to create sets of optional dependencies. Groups are private ones for maintainers; extras are public ones for users.
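In pyproject.toml terms it looks roughly like this (the package and dependency names here are made up for illustration): extras live under `[project.optional-dependencies]`, groups under `[dependency-groups]`.

```toml
[project]
name = "mylib"
version = "0.1.0"

# Extras: public optional sets users can install,
# e.g. `pip install "mylib[cli]"` or `uv sync --extra cli`
[project.optional-dependencies]
cli = ["click"]

# Groups: private sets for maintainers, e.g. `uv sync --group dev`
[dependency-groups]
dev = ["pytest", "ruff"]
```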
https://github.com/hyperdxio/hyperdx-js/pull/184