It's a good post, and I strongly agree with the part about level setting. You see the same tired arguments basically every day here and in subreddits like /r/ExperiencedDevs. I read a few today; my favorites are:
- It cannot write tests because it doesn't understand intent
- Actually it can write them, but they are "worthless"
- It's just predicting the next token, so it has no way of writing code well
- It tries to guess what code means and will be wrong
- It can't write anything novel because it can only write things it's seen
- It's faster to do all of the above by hand
I'm not sure if the issue is that they tried Copilot with GPT-3.5 or something, but anyone who uses Cursor daily knows all of the above is false; I make it do these things every day and it works great. There was another comment I saw here or on Reddit about how everyone needs to spend a day with Cursor and get good at understanding how prompting + context works. That is a big ask, but I think the savings are worth it once you get the hang of it.
Yes. This "next token" stuff is a total tell that we're not all having the same conversation, because what serious LLM-driven developers are doing differently today versus a year ago has little to do with the evolution of the SOTA models themselves. If you get what's going on, the "next token" thing is beside the point. It's not about the model, it's about the agent.
You can ask it to do this: in your initial prompt, encourage it to ask questions before implementing if it is unsure. Certain models like o4 seem to do this by default more than Claude, which tends to try to do everything without asking for clarification.
> It does support some kind of view mapping technique
Can you call .Select(entity => new SomeSmallerModel { Name = entity.Name }) or something like that to select only what you need? If I am understanding your issue correctly.
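If this is EF Core or another IQueryable provider (an assumption here; the Person entity, PersonDto type, and db context are hypothetical names for illustration), projecting into a smaller type inside Select keeps the query down to only the columns you name. A minimal sketch:

```csharp
// Hypothetical DTO holding only the fields we actually need.
public class PersonDto
{
    public string Name { get; set; }
}

// Projection: with an IQueryable provider like EF Core, this is
// translated to roughly "SELECT Name FROM People" instead of
// materializing whole entities.
var names = db.People
    .Select(entity => new PersonDto { Name = entity.Name })
    .ToList();
```

The same Select call also works on in-memory collections via LINQ to Objects, so the projection shape doesn't change if you later swap the data source.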
I also agree that it's one of the least worst, but there are still things that annoy me.
The problem is that most users will not notice or understand it. Someone will spoof the dialog without the special icon or phrase, and users will still enter the password.
The only real example I can think of is the US options market feed. It is up to something like 50 GiB/s now and is open 6.5 hours per day. Even a small subset of the feed that someone is working on for data analysis could be huge. I agree CSV shouldn't even be used here, but I am sure it is.
It looks like those horribly edited gift mugs I see on Amazon occasionally, where someone just puts the image over the mug without accounting for the 3D shape. Too many variants to actually take the image. It would have been an excellent example of how much better AI is if they had made it do that.
That one is an odd example, especially since image #3 does a similar task with excellent accuracy, keeping the old image intact. I've had the same issues when trying to make it visualize adding decor: it ends up changing the whole room or the furniture materials.
This is sorta what customers expect. When they want a refund or have a problem, they ask the developer. They don't understand that payments all go through Apple AND that Apple is responsible for all billing support. With IAP there is no way to even look up a customer or get a list of your customers, aside from using something like RevenueCat, which tries to link your user accounts with the device receipts to figure out who is subscribed. Customers find it ridiculous that you can't help them with any billing questions at all.