I'm posting the update here for visibility:
----
Hey everyone. We’ve received your feedback and heard your concerns; we were less than artful in expressing our intentions. Many of the things people are worried about are not things we plan to pursue, and we should have been clearer about that.
First off, we’re delaying the effective date of our TOS change indefinitely, until we’ve addressed the concerns you’ve shared.
As part of these changes, we agree that we need to provide a better consent mechanism for many of our customers. In reviewing this, we quickly realized that the majority of what we’re looking to accomplish today (e.g. improving our fingerprinting heuristics) is being overshadowed by the less well-understood, hypothetical investments we’d like to explore.
We will not push customers to agree now to use cases that are not yet well-defined or understood. Instead, we commit to offering a consent mechanism for those future, hypothetical use cases once we can clearly define them, so customers have the opportunity to evaluate whether a feature is valuable enough to them to contribute their data. That could include, for example, a system that uses customer data to train a large language model. Additionally, we’re exploring an opt-out mechanism for other uses of data, such as training a heuristic model to better group errors together. While the implications of these two applications are very different, we understand customers’ desire for more control over how their data is used.
We need some time to explore what these changes would look like and how to implement them in a way that stays true to protecting our customers.
Thanks for bearing with us.