theogravity's comments (Hacker News)

This is really cool considering how expensive DataDog can get. I'm the author of LogLayer (https://loglayer.dev), which is a structured logger for TypeScript that lets you use multiple loggers together. I've written transports that allow shipping to other loggers like pino and to cloud providers such as DataDog.

I spent some time writing an integration for HyperDX after seeing this post and hope you can help me roll it out! Would love to add a new "integrations" section to my page that links to the docs on how to use HyperDX with LogLayer.

https://github.com/hyperdxio/hyperdx-js/pull/184


Could you add the ability to ship logs to VictoriaLogs? The logs can be shipped via one of the supported data ingestion protocols - https://docs.victoriametrics.com/victorialogs/data-ingestion...

Definitely possible. HTTP direct send is a common pattern, so I'm working on an HTTP-specific transport that can be used for these use-cases (vs making a unique impl for each one).

I'll reach out when this is available.
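Roughly the shape I have in mind for a generic HTTP transport (everything here is illustrative, not the final API — the `_msg` field follows VictoriaLogs' JSON-lines ingestion format, but treat the endpoint, headers, and names as assumptions):

```typescript
// Sketch of a generic HTTP log shipper. Field and function names are
// placeholders; the JSON-lines body shape mirrors what VictoriaLogs'
// ingestion docs describe, but verify against the linked docs.
interface LogEntry {
  level: string;
  message: string;
  [field: string]: unknown;
}

// Serialize one entry per line ("JSON lines"), which many HTTP
// ingestion endpoints accept in a single POST body.
function toJsonLines(entries: LogEntry[]): string {
  return entries
    .map((e) => JSON.stringify({ _msg: e.message, ...e }))
    .join("\n");
}

// Ship a batch of entries to a configurable ingestion URL.
async function ship(entries: LogEntry[], url: string): Promise<void> {
  await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/stream+json" },
    body: toJsonLines(entries),
  });
}
```

The idea is that only the URL and the serialization differ per backend, so one transport can cover many of these HTTP-ingestion cases.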


Hey, this looks awesome! We'll take a look at it.

I think FusionAuth does something similar. They have a global user and use the notion of tenants / application registrations (which I think is comparable to a Tesseral Organization) to segment the same user.

Then you can define applications (mapped 1:1 to tenants) where a user has a registration entry against that application; the user can be referenced by their global user id or an application-specific user id.

Applications are OAuth2 applications (meaning a dedicated client id / secret), so we only create a single application and tenant, and maintain organization segmentation on our own application / db side instead.

(We're paying customers of FusionAuth. Anyone from FusionAuth, feel free to correct me.)


I totally recommend the Basement Brothers YouTube channel, which has a large set of reviews with summarized playthroughs and historical background for PC-88 and PC-98 games:

https://www.youtube.com/watch?v=96tLZTtNcZA&list=PL_W1EM66_B...


Does it handle:

- Federated sign-in/out? In next-auth, it is a giant pain to implement: https://github.com/nextauthjs/next-auth/discussions/3938

- Automated refreshing of JWT tokens on the client-side? I always end up having to implement my own logic around this. The big problem is if you have multiple API calls going out and they all require JWT auth, you need to check the JWT validity and block the calls until it is refreshed. In next-auth on the server-side, this is impossible to do since that side is generally stateless, and so you end up with multiple refresh calls happening for the same token.

- The ability to have multiple auth sessions at once, like in a SaaS app where you might belong to multiple accounts / organizations (your intro paragraph sounds like it does)

- Handle how multiple auth sessions are managed if the user happens to open up multiple tabs and swaps accounts in another tab

- Account switching using a Google provider? This seems to be a hard ask for providers like FusionAuth and Cognito. You can't use the Google connector directly; instead you have to use a generic OAuth2 connector where you can specify custom parameters when initiating the OAuth2 flow with Google. The use-case: when a user clicks the Google sign-in button, it should go to the Google account switcher / selector instead of signing the user in immediately if they have an existing signed-in Google session.


- Not right now, but there’s already an open issue and a PR in progress.

- We don’t use JWTs directly, and sessions always require state (it’s not stateless). And yeah, both the client and server handle automatic session refresh.

- Yes, we support both multiple sessions and having different organizations open in different tabs: https://www.better-auth.com/docs/plugins/multi-session

- Yes, that’s possible, you just need to set the `prompt` parameter to `select_account`
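For example, building the authorization URL by hand (the client id and redirect URI below are placeholders):

```typescript
// Force Google's account chooser by adding prompt=select_account to the
// standard OAuth2 authorization request. client_id / redirect_uri are
// placeholder values for illustration.
const params = new URLSearchParams({
  client_id: "YOUR_CLIENT_ID",
  redirect_uri: "https://app.example.com/callback",
  response_type: "code",
  scope: "openid email profile",
  prompt: "select_account", // show the switcher instead of silent sign-in
});
const authUrl = `https://accounts.google.com/o/oauth2/v2/auth?${params}`;
```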


As another commenter asked: why no JWT? It makes interfacing with our API servers so much easier, as we don't need to maintain infra for sessions and aren't limited by the 4 KB size limit for sending cookies.


I use Better Auth for a real app.

There is a plugin provided by Better Auth for JWT: https://www.better-auth.com/docs/plugins/jwt

We don't need it since everything is a single "server" and cookies are good enough. JWT would be added complexity (e.g. sign-out), so I think it's better for it not to be the default.

Bonus reading: http://cryto.net/~joepie91/blog/2016/06/19/stop-using-jwt-fo...


> We don’t use JWTs directly

Why?


Evidently they prefer to be less secure by default.


JWTs aren’t less or more secure by default; see the comments posted above.


How did you resolve the multiple refresh calls issue? Do you use swr hooks on the front end? Been thinking about how to do this myself.


No hooks on the FE side. We use a global lock via a promise. Our API clients are not tied to react in any way.

For all API calls, if the lock is not set, the client checks whether the JWT is still valid. If it isn't, the lock is set by assigning a new promise to it, and the resolve function is saved in an external variable to be called after the refresh is done (which resolves the promise the other calls are holding, letting them use the latest token).

All calls await the lock; it either waits for the refresh to complete or just moves on and performs validation with the currently set token.

Looks like this:

- await on lock; if the lock has been resolved, will just continue on

- Check JWT validity via an `exp` check (the API server itself is responsible for checking the signature and other validity factors); if not valid, update the lock with a new promise and hold the resolver. Perform the refresh. Release the lock by resolving the promise.

- Use current / refreshed JWT for API call
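A minimal sketch of that pattern (names are illustrative, not our actual client code):

```typescript
// Promise-based refresh lock. `token`, `withFreshToken`, and `refresh`
// are made-up names for illustration.
let token = ""; // current JWT
let lock: Promise<void> = Promise.resolve();

// Client-side exp check only; the API server verifies the signature.
function isExpired(jwt: string): boolean {
  const payload = JSON.parse(
    Buffer.from(jwt.split(".")[1], "base64url").toString()
  );
  return payload.exp * 1000 <= Date.now();
}

async function withFreshToken(
  refresh: () => Promise<string>
): Promise<string> {
  for (;;) {
    const seen = lock;
    await seen; // wait on any in-flight refresh
    if (lock !== seen) continue; // a refresh started while we waited; re-wait
    if (!isExpired(token)) return token;
    // We take the lock: install a new promise and hold its resolver.
    let release!: () => void;
    lock = new Promise<void>((resolve) => (release = resolve));
    try {
      token = await refresh(); // single refresh for all queued callers
    } finally {
      release(); // unblock everyone awaiting the lock
    }
    return token;
  }
}
```

The re-check after awaiting the lock matters: several callers can be queued on the same resolved promise, and without it each would kick off its own refresh.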


That demo was pretty mesmerizing!



Just deleted my data. Who knows who will own it after this?


Their own privacy team told me they are bound by regulatory obligations to retain data even after you request deletion. I've notified our attorney general's office to see if anything can be done, but it might be too late. I'd love for someone who knows these "regulatory obligations" to chime in.


Interesting, as that would be a violation of the CCPA/CPRA for customers who are California residents, at least.

The "regulatory obligations" line sounds like a fake excuse. Is 23andMe even regulated beyond how your average business might be?


Federal regulations trump state codes.


Yes, they have been regulated by the FDA for a decade now.


>their own privacy team told me they are bound by regulatory obligations to retain data even after you request deletion

They have to retain data about the person who requested the deletion which seems eminently reasonable. In the future if you sue them because you can't access your account that you paid for, they have a record that you requested said account's deletion.

Similarly they obviously can't withdraw your data from the anonymized research projects they pursued.


Well, purely going on who it is of value to, probably life insurance providers or pharmaceutical companies.


Still waiting for my "deletion confirmation e-mail". Hopefully it arrives.


I've had situations where Cursor starts doing some really bizarre things after long-running cycles of unsuccessfully attempting tasks, like the death loop I've seen described in other threads.

Best way to deal with this is to just clear the embedding index from the cursor settings and rebuild it.

I've never had it get to the point where it wants to rm -rf my home directory, but now I'm a bit fearful that one day it will, since I currently have it on auto-run.


Is there a guide for how to use uv if you're a JS dev coming from pnpm?

I just want to create a monorepo with python that's purely for making libraries (no server / apps).

And is it normal to have a venv for each library package you're building in a uv monorepo?


If the libraries are meant to be used together, you can get away with one venv. If they should be decoupled, then one venv per lib is better.

There is not much to know:

- uv python install <version> if you want a particular version of python to be installed

- uv init --vcs none [--python <version>] in each directory to initialize the python project

- uv add [--dev] to add libraries to your venv

- uv run <cmd> when you want to run a command in the venv

That's it, really. Any bonus can be learned later.


There's also workspaces (https://docs.astral.sh/uv/concepts/projects/workspaces/) if you have common deps and it's possible to act on a specific member of the workspace as well.
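A root pyproject.toml for a workspace looks roughly like this (a sketch based on the linked docs; package and path names are made up):

```toml
# pyproject.toml at the monorepo root (illustrative names)
[project]
name = "monorepo-root"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = []

[tool.uv.workspace]
# every directory matching this glob becomes a workspace member
members = ["packages/*"]
```

And when one member depends on a sibling, that member's own pyproject.toml points uv at the workspace instead of PyPI:

```toml
# inside packages/my-app/pyproject.toml (hypothetical member)
[tool.uv.sources]
my-lib = { workspace = true }
```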


That's one of the bonuses I was thinking about. It's nice if you have a subset of deps you want to share, or if one dep is actually part of the monorepo, but it does require knowing more.


Thanks. Why is the notion of run and tool separate? Coming from JS, we have the package.json#scripts field and everything executes via a `pnpm run <script name>` command.


Tool ?

Maybe you mean uv tool install ?

In that case it's something you don't need right now, uv tool is useful, but it's a bonus. It's to install 3rd party utilities outside of the project.

There is no equivalent to scripts yet, although they are adding it as we speak.

uv run executes any command in the context of the venv (which is like a node_modules); you don't need to declare commands before calling them.

e.g.: uv run python will start the python shell.


I was looking at https://docs.astral.sh/uv/concepts/tools/#the-uv-tool-interf...

Edit: I get it now. It's like npm's `npx` command.


uvx is the npx equivalent; it's provided with uv, and also has some nice bonuses.


uv sync if you clone a github repo


uv run in the freshly cloned repo will create the venv and install all deps automatically.

You can even use --extra and --group with uv run like with uv sync. But in a monorepo, those are rare to use.


Thanks for the info.

I looked at the group documentation, but it's not clear to me why I would want to use it, or where I would use it:

https://docs.astral.sh/uv/concepts/projects/layout/#default-...

(I'm a JS dev who has to write a set of python packages in a monorepo.)


sync is something you would rarely use; it's most useful for scripting.

uv run is the bread and butter of uv: it will run any command you need in the project, and ensure it works by syncing all deps and making sure your command can import stuff and call python.

In fact, if you run a python script, you should do uv run python the_script.py.

It's so common that uv run the_script.py works as a shortcut.

I will write a series of articles on uv on bitecode.dev, and I'll write them so they work for non-python devs as well.


Did you mean group and not sync?

Really looking forward to the articles!


Sorry, I misread and stayed on sync. Groups and extras are for lib makers to create sets of optional dependencies. Groups are private ones for maintainers; extras are public ones for users.


Agreed, if you don't know what Datadog is then you're probably not the target audience for this product.


Do you think if I don't know what datadog is, I am not the target audience for datadog?


Kinda? There aren't that many players in this niche and datadog is the "dog".


probably

