Unfortunately this is becoming a common trend - using user data for training purposes, or selling it. The Carta fiasco (even though that wasn’t for AI purposes) is another example. The majority of users won’t care, and the minority that does isn’t enough to spark outrage or lead to any change.
Are there any alternatives that are as comprehensive and nice to use as Sentry?
I’ve been growing more and more disappointed with the JS SDK’s bundle size over the last few years. Some of it can be mitigated by webpack config hacks (https://docs.sentry.io/platforms/javascript/guides/nextjs/co...), but it’s still quite a large portion of the production bundle.
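For anyone else fighting this, one mitigation in that vein is to have the bundler compile out the SDK’s optional code paths so the minifier can drop them. A minimal sketch for next.config.js, assuming the __SENTRY_DEBUG__ and __SENTRY_TRACING__ build-time flags from Sentry’s tree-shaking docs (verify them against your SDK version):

    // next.config.js - compile out Sentry's optional code at build time
    module.exports = {
      webpack: (config, { webpack }) => {
        config.plugins.push(
          new webpack.DefinePlugin({
            // Replaced with the literal `false` at build time, so the
            // minifier can drop the SDK's debug-only logging branches.
            __SENTRY_DEBUG__: false,
            // Likewise drops performance-tracing code if you don't use it.
            __SENTRY_TRACING__: false,
          })
        );
        return config;
      },
    };

DefinePlugin only substitutes the flags as literals; the actual size win comes from the minifier eliminating the now-unreachable branches, so measure the bundle before and after to confirm it helps your setup.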
As others have said, and I agree, the title isn't misleading, as any data is "user data"; your argument revolves around the semantics of what "personal data" is (or isn't).
Beyond that, I think it's fair for folks using Sentry to be disappointed in the decision. It's very hard to classify data types for exclusion, and much easier to go down the "inclusion" route. But in this case Sentry can later argue that, whoops, this data was incorrectly classified because the user interpreted our Rube Goldberg machine incorrectly. Par for the course as far as anti-patterns go.
I'm posting the update here for visibility:
----
Hey everyone. We’ve gotten your feedback and heard your concerns; we were less than artful in expressing our intentions. Many of the things that people are worried about are not things that we plan to pursue, and we should have been more clear.
First off, we’re going to delay the effective date of our TOS change indefinitely until we’ve satisfied the shared concerns.
As part of these changes, we agree that we need to provide a better consent mechanism for many of our customers. When reviewing this, we quickly realized the majority of what we’re looking to accomplish (e.g. improving our fingerprinting heuristics) is being clouded by the less understood hypothetical investments we’d like to explore.
We will not push customers to agree now to use cases that are not well-defined or understood. Rather, we commit to offering a consent mechanism for those future hypothetical use cases when we can clearly define them so customers have the opportunity to evaluate whether the feature is valuable enough to them to contribute their data. That could include, for example, a system which might use customer data to train a large language model. Additionally, we’re exploring an opt-out mechanism for other uses of data, such as training a heuristic model to better group errors together. While the implications of these two applications are very different, we understand customers’ desire for more control in how their data is used.
We need some time to explore what these changes would look like and how we would implement them in a way that stays true to protecting our customers.
Thanks for bearing with us.
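For context on the “fingerprinting heuristics” mentioned above: fingerprints are how Sentry decides which error events get grouped into the same issue, and the JS SDK already exposes a documented per-event override. A quick sketch (the fingerprint values here are made up):

    import * as Sentry from '@sentry/browser';

    Sentry.withScope((scope) => {
      // Events with the same fingerprint are grouped into one issue,
      // overriding Sentry's default heuristic grouping for this event.
      scope.setFingerprint(['payment-worker', 'timeout']);
      Sentry.captureException(new Error('Payment request timed out'));
    });

The “heuristic model to better group errors together” in the statement would presumably feed into the default grouping that applies when no explicit fingerprint like this is set.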