Putting aside the merits of this specific case and anyone's positive or negative sentiments toward OpenAI, this tactic seems like it could be used to destroy any business or organization whose customers place a high value on privacy, without actually going through due process and winning a lawsuit.
Imagine a lawsuit against Signal that claimed some nefarious activity, harmful to the plaintiff, was occurring broadly in chats. The plaintiff can claim, like NYT, that it might be necessary to examine private chats in the future to make a determination about some aspect of the lawsuit, and the judge can then order Signal to find a way to retain all chats for potential review.
However you feel about OpenAI, this is not a good precedent for user privacy and security.
I'm confused by how you think the NYT isn't going through due process and attempting to win a lawsuit.
The court isn't saying "preserve this data forever and ever and compromise everyone's privacy," they're saying "preserve this data for the purposes of this court while we perform an investigation."
IMO, the NYT has a very good argument here that the only way to determine the scope of the copyright infringement is to analyze requests and responses made by every single customer. Like I said in my original comment, the remedies for copyright infringement are on a per-infringement basis. E.g., every time someone on LimeWire downloads Song 2 by Blur from your PC, you've committed one instance of copyright infringement. My interpretation is that the NYT wants the court to find out how many times customers have received ChatGPT responses that include verbatim New York Times content.
That's not entirely fair. The argument isn't "users are using the service to break the law" but rather "the service is facilitating law-breaking". To fix your Signal analogy, suppose you could use the chat interface to request copyrighted material from the operator.
That doesn't change the outcome: the app still has to hand over everyone's plaintext messages, including the chat history of every user.
Right. But requiring logs due to suspicion that the service itself is actively violating the law is entirely different from doing so on the basis that end users might be up to no good independently of the service.
Also OpenAI was never E2EE to begin with. They were already retaining logs for some period of time.
My personal view is that the court order is overly broad and disregards potential impacts on end users but it's nonetheless important to be accurate about what is and isn't happening here.
Again, keep in mind that we are talking about case-limited analysis of that data within the privacy of the court system.
For example, if the trial happens to surface evidence that some users discussed crimes in their private chats, the court can't just send police to your door based on that information, since it is only being used in the context of an intellectual property lawsuit.
Remember that privacy rights are legitimate rights, but they change a lot when you're in the context of an investigation/court proceeding. E.g., the right of police to enter and search your home changes a lot when they get a court-issued warrant.
The whole point of E2EE services, from the perspective of privacy-conscious customers, is that a court can get a warrant for data from those companies, but they'll only be able to produce encrypted blobs with no access to decryption keys. OpenAI was never an E2EE service, so customers have to expect that a court order could surface their data to someone else's eyes at some point.
I think the issue is that "easier to follow the happy flow" basically also implies "easier to ignore the unhappy flow", and this is what a lot of people who have come to like Go's approach react against. We like that the error case takes up space—it's as important to deal with correctly as the happy path, so why shouldn't it take up space?
Is it 3x as important? Because currently, for a single line of happy path code you often have 3 lines of error handling code.
And while handling errors is important, it is also often trivial, just optionally wrapping the error and returning it up to the caller. I agree it is good for that to be explicit, but I don't think it needs to take up more space than the actual function call.
In my experience, the important error handling code is either at the lowest layer, where the error initially occurs, or at the top level, where the error is reported to the user. Mid-level code usually just propagates errors upwards, something like the sketch below.
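A typical mid-level function ends up looking roughly like this (fetchWidget, loadWidget, and Widget are made-up names, just to show the shape):

    // One line of happy path, three lines of wrap-and-return plumbing.
    func fetchWidget(id string) (Widget, error) {
        w, err := loadWidget(id) // loadWidget is hypothetical
        if err != nil {
            return Widget{}, fmt.Errorf("fetching widget %s: %w", id, err)
        }
        return w, nil
    }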
Stack traces are full of noise by comparison and don't have context added by the programmer at each frame. For me, Go error chains are much easier to work with. I can see the entire flow of the error at a glance, and can zero in on the relevant call in seconds with a single codebase search.
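For instance, a chain built by wrapping with %w at each layer prints as a single greppable line (a contrived example):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Each layer adds its own context to the chain.
        err := fmt.Errorf("load config: %w",
            fmt.Errorf("read /etc/app.yaml: %w", os.ErrNotExist))
        fmt.Println(err)
        // Output: load config: read /etc/app.yaml: file does not exist
    }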
Stack traces are not so long that you can't find the information you need in them, and actually, just like Go, some languages let you add context to your stack trace (e.g. in Python by raising an error from another error).
My experience in Go was the opposite of yours. The original devs (who were long gone) provided no information at all at the error site, and I felt lucky even to find the place in the code that produced the error. Unfortunately the "force you to handle errors" idea, while well intentioned, doesn't "force you to provide useful error handling information", making it worse than stack traces by default.
They clearly are wrestling with these issues, which to me seems like taking the feedback seriously. Taking feedback seriously doesn’t imply you have to say yes to every request. That just gives you a design-by-committee junk drawer language. We already have enough of those, so personally I’m glad the Go team sticks to its guns and says no most of the time.
How is Go not a design-by-committee language? They don't have a single lead language developer or benevolent dictator, and as this blog demonstrates, they're very much driven by consensus.
I have no problem with Go’s error handling. It’s not elegant, but it works, and that’s very much in keeping with the pragmatic spirit of Go.
I’m actually kind of surprised that it’s the top complaint among Go devs. I always thought it was more something that people who don’t use Go much complain about.
My personal pet issue is lack of strict null checks—and I’m similarly surprised this doesn’t get more discussion. It’s a way bigger problem in practice than error handling. It makes programs crash in production all the time, whereas error handling is basically just a question of syntax sugar. Please just give me a way to mark a field in a struct required so the compiler can eliminate nil dereference panics as a class of error. It’s opt-in like generics, so I don’t see why it would be controversial to anyone?
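To make the class of crash concrete, here's a minimal sketch; the "required" marker in the comment is hypothetical syntax, not real Go:

    package main

    import "log"

    type Server struct {
        // Wish: a way to mark this required, e.g. Logger *log.Logger `required`
        Logger *log.Logger
    }

    func main() {
        s := Server{}            // compiles fine; Logger is the zero value, nil
        s.Logger.Println("boom") // panics at runtime: nil pointer dereference
    }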
It's too easy to accidentally write `if err == nil` instead of `if err != nil`. I have even seen LLMs erroneously generate the former instead of the latter. And since it's such a tiny difference and the code is riddled with `if err != nil`, it's hard to catch at review time.
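A made-up fragment (process is hypothetical) to show why this is nasty:

    data, err := os.ReadFile(path)
    if err == nil { // typo for != nil: returns on success, continues on failure
        return err
    }
    process(data) // runs with nil data exactly when ReadFile failed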
Second, you're not forced by the language to do anything with the error at all. If `err` is already used elsewhere in the function, not handling the `err` return value from a specific call still compiles silently. E.g.:
    x, err := strconv.Atoi(s1)
    if err != nil {
        panic(err)
    }
    y, err := strconv.Atoi(s2) // err is reassigned here but never checked...
    fmt.Println(x, y)          // ...yet this compiles without complaint
I think accidentally allowing such bugs, and making them hard to spot, is a serious design flaw in the language.
I guess those are fair criticisms in the abstract, but personally I can’t recall a single time either has caused a bug for me in practice. I also can’t ever recall seeing an LLM or autocomplete mess it up (just my personal experience—I’m sure it can happen).
> It’s opt-in like generics, so I don’t see why it would be controversial to anyone?
It "breaks" the language in fundamental ways — much more fundamental than syntactic sugar for error handling — by making zero values and thus zero initialisation invalid.
You even get this as a fun interaction with generics:
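Generic code can only conjure a value of a type parameter through its zero value, so something like this minimal sketch stops being writable if zero values can be invalid:

    // Zero returns the zero value of any type T.
    // If some types had no valid zero value, `var t T` could not compile.
    func Zero[T any]() T {
        var t T
        return t
    }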
I don’t see how it breaks anything if it’s opt-in. By default you get the current behavior with zero value initialization if that’s what you want (and in many cases it is). But if you’d rather force an explicit value to be supplied, what’s the harm?
If it would only complain on struct literals that are missing the value (and force a nil check before access if the zero value is nil to prevent panics), that would be enough for me. In that case, your Zero function and reflect.Zero can keep working as-is.
I'd add that with this approach you have to be careful not to just outsource the design to the customer.
Customers give valuable feedback, but it's rarely a good idea to implement their ideas as-is. Usually you want to carefully consider problems/friction/frustration that they bring up, but take their suggested solutions with a grain of salt.
This can be harder than it sounds, because customers who give the best feedback are often very opinionated, and you naturally want to "reward" them by including exactly what they ask for.
I think the key thing is to keep iterating and experimenting. Keep posting into the void, but don't keep doing it the same way every time. If your tweets get 5 views, don't just keep tweeting. Try a different platform, or target the tweet at a community/niche, or try presenting the post/content in a different way, etc. If you find something that works even marginally better, double down on that.
Often the people who seem to suddenly "make it" are doing this, but it gets left out of the story.
One way is to eagerly call JSON.parse as fragments are coming in. If you also split on JSON semantic boundaries like quotes/closing braces/closing brackets, you can detect valid objects and start processing them while the stream continues.
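A rough sketch of the same idea in Go, where encoding/json's Decoder does the boundary detection on a stream of concatenated values:

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "strings"
    )

    func main() {
        // Pretend this reader is a network stream delivering fragments.
        stream := strings.NewReader(`{"id":1} {"id":2} {"id":3}`)
        dec := json.NewDecoder(stream)
        for {
            var obj map[string]any
            if err := dec.Decode(&obj); err == io.EOF {
                break // stream finished
            } else if err != nil {
                break // real code: buffer the partial fragment and wait for more bytes
            }
            fmt.Println("got object:", obj) // process each object as soon as it completes
        }
    }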
I have a 6-year-old daughter; I got her a Lego Boost robot kit recently and she seems to be taking to the programming aspect. It's cool to watch her experimenting. It has a nice graphical block-based programming environment that is pretty intuitive for her on an iPad. It makes programming concepts very concrete/tangible.
It's fun for me too, since learning to program robots has always been on my bucket list. ChatGPT helps: even though the kit is meant to be intuitive, you still run into various issues pretty often, and documentation is scarce. Sending screenshots to o3 works amazingly well for getting unstuck.
Yeah, and I think it’s also simply that inference with strong models is expensive.
OpenAI is lighting boatloads of money on fire to provide the free version of ChatGPT. Same with Google for their AI search results, and Perplexity, which has also raised a lot. Unless you can raise a billion and find a unique wedge, it's hard to even be in the game.
You can try to use small cheap models, but people will notice that free ChatGPT is 10x better.