> The adoption rate for the iPhone was slow. There were only 1.4 million iPhones sold in its first year,[1] whereas there were 100 million weekly active ChatGPT users in its first year.[2]
The ChatGPT number includes people who paid no money. iPhone adoption was incredibly fast for a paid product.
I'm not really commenting on that; I'm saying the practice is good for me as an interviewee.
However, I do think it's a good way to filter candidates. I should clarify that what I'm talking about is fairly basic programming tasks, not very hard LeetCode-style DSA tasks. I've never been given an actually hard task in an interview; they've all been fairly simple, like write a bracket tax calculator, or write a class that stores car objects and can get them by plate number, that kind of thing. I also helped a friend with a take-home where we fetched some data from SpaceX's API and displayed it in an HTML table.
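To give a sense of the level I mean, here's a minimal sketch of the car-by-plate-number task in Python (the `Car`/`CarStore` names are just mine for illustration, not from any specific interview):

```python
from dataclasses import dataclass

@dataclass
class Car:
    plate: str
    make: str
    model: str

class CarStore:
    """Stores Car objects and looks them up by plate number."""

    def __init__(self) -> None:
        self._by_plate: dict[str, Car] = {}

    def add(self, car: Car) -> None:
        self._by_plate[car.plate] = car

    def get(self, plate: str) -> Car | None:
        return self._by_plate.get(plate)

store = CarStore()
store.add(Car("AB12345", "Toyota", "Corolla"))
print(store.get("AB12345"))
```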
Every time I do these, people act like I'm Jesus for solving a relatively simple task. Meanwhile I'm just shocked that this is something my peers struggle with. I would have honestly expected any decent dev to be able to do these with roughly the same proficiency as myself, but it turns out almost nobody can.
That's why I think it's a good way to test candidates. If you're going to work as a programmer you should be able to solve these types of tasks. I don't care if you're frontend, backend, finance, healthcare, data science, whatever kind of programming you normally do, you should be able to do these kinds of things.
If someone can't, then by my judgement they don't really know programming. They may have figured out some way to get things done anyway, but I bet the quality of their work reflects their lack of understanding. I've seen a lot of code written by people like this, and it's very clear that a lot of developers really don't understand the code they're writing. It's honestly shocking how bad most "professional software developers" are at writing simple code.
If one is trying to make an argument about the usefulness of LLMs, it’s irrelevant whether LLMs on their own can cite sources. If they can be trivially put into a system that can cite sources, that is a better measure of their usefulness.
I mean, it’s not trivial. There is a lot of work involved with enabling tool use at scale so that it works most of the time. Hiding that work makes it worse for the common user, because they aren’t necessarily going to understand the difference between platforms.
> The more interesting part for me (esp as a computer vision at heart who is temporarily masquerading as a natural language person) is whether pixels are better inputs to LLMs than text. Whether text tokens are wasteful and just terrible, at the input.
> Maybe it makes more sense that all inputs to LLMs should only ever be images.
So, what, every time I want to ask an LLM a question I paint a picture? I mean at that point why not just say "all input to LLMs should be embeddings"?
In the post, he's referring to text input as well:
> Maybe it makes more sense that all inputs to LLMs should only ever be images. Even if you happen to have pure text input, maybe you'd prefer to render it and then feed that in:
Italicized emphasis mine.
So he's suggesting, or at least wondering, whether the vision encoder should be the only input path to the LLM, with the model reading text through it. There would be a rasterization step on the text input to generate the image.
Thus, you wouldn't need to draw a picture; you'd just rasterize the text and feed that to the vision model.
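A rough sketch of what that rasterization step could look like, assuming Pillow is used for rendering (font, canvas size, and positioning are placeholder choices):

```python
from PIL import Image, ImageDraw, ImageFont

def render_text_to_image(text: str, width: int = 1024, height: int = 256) -> Image.Image:
    """Rasterize plain text onto a white canvas so a vision encoder can read it."""
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # placeholder; a real pipeline would pick a deliberate font/size
    draw.multiline_text((10, 10), text, fill="black", font=font)
    return img

# The resulting pixels, not text tokens, would be what the model consumes.
img = render_text_to_image("Even if you happen to have pure text input, render it first.")
img.save("rendered_prompt.png")
```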
All inputs being embeddings can work if you have embeddings like Matryoshka embeddings; the hard part is adaptively selecting the embedding size for a given datum.
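Roughly, the Matryoshka idea is that prefixes of an embedding are themselves usable embeddings, so you can truncate to whatever size a datum deserves. A sketch with NumPy (the per-datum size heuristic here is made up for illustration):

```python
import numpy as np

def truncate_matryoshka(embedding: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` coordinates and re-normalize; Matryoshka-trained
    embeddings are designed to remain useful under this truncation."""
    prefix = embedding[:dim]
    return prefix / np.linalg.norm(prefix)

full = np.random.randn(1024)
full /= np.linalg.norm(full)

# The hard part is choosing `dim` per datum; a toy heuristic might spend
# more dimensions on longer or more complex inputs.
for input_length, dim in [(20, 64), (500, 256), (5000, 1024)]:
    print(input_length, truncate_matryoshka(full, dim).shape)
```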
And this is because Spotify has a free, ad-based tier that pays way less than the premium listeners, whereas Apple Music is all paying users (ignoring the limited free trial).
It’s quite easy to produce a model that’s better than GPT-5 at arbitrarily small tasks. As of right now, GPT-5 can’t classify a dog by breed based on good photos for all but the most common breeds, which is like an AI-101 project.
Try doing a head-to-head comparison using all the LLM tricks available, including prompt engineering, RAG, reasoning, inference-time compute, multiple agents, tools, etc.
Then try the same thing using fine-tuning and see which one wins. In ML class we have labeled datasets with dog breeds hand-labeled by experts like Andrej; in real life, users don’t have specific, clearly defined, high-quality labeled data like that.
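For the fine-tuning side, this is roughly what the AI-101 version looks like, assuming a torchvision-style setup (the dataset path, breed count, and hyperparameters are placeholders):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: one subfolder per breed (ImageFolder convention); path is a placeholder.
train_dir = "data/dog_breeds/train"
num_breeds = 120  # placeholder count

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
loader = DataLoader(datasets.ImageFolder(train_dir, transform=tfm), batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and swap the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_breeds)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # placeholder epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```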
I’d be interested to be proven wrong
I think it is easy for strong ML teams to fall into this trap because they themselves can get fine tuning to work well. Trying to scale it to a broader market is where it fell apart for us.
This is not to say that no one can do it. There were users who produced good models. The problem we had was where to consistently find these users who were willing to pay for infrastructure.
I’m glad we tried it, but I personally think it is beating a dead horse/llama to try it today
I mean, at the point where you’re writing tools to assist it, we are no longer comparing the performance of 2 LLMs. You’re taking a solution that requires a small amount of expertise, and replacing it with another solution that requires more expertise, and costs more. The question is not “can fine tuning alone do better than every other trick in the book plus a SOTA LLM plus infinite time and money?” The question is: “is fine tuning useful?”
> How can you hire enough people to scale that while making the economics work?
Once you (as in you, the person) have the expertise, what exactly do you need all the people for? To fine-tune, you need to figure out the architecture, how to train, how to infer, put together the dataset, and then run the training (optionally set up a pipeline so the customer can run the "add more data -> train" process themselves). What in this process do you need to hire so many people for?
> Why would they join you rather than founding their own company?
Same as always, in any industry, not everyone wants to lead and not everyone wants to follow.
The problem is that it doesn’t always work and when it does fail it fails silently.
Debugging requires knowing some small detail about your data distribution or how you did gradient clipping, details which take time and painstakingly detailed experiments to uncover.
> The problem is that it doesn’t always work and when it does fail it fails silently.
Right, but why does that mean you need more employees? You need to figure out how to surface failures, rather than just adding more meat to the problem.
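For what it's worth, one way to surface those silent failures is to instrument the training step itself; a hedged sketch with PyTorch (the thresholds are illustrative, not tuned):

```python
import math
import torch

def checked_step(model, loss, optimizer, max_grad_norm=1.0, log=print):
    """Backward + optimizer step that reports the failures that otherwise stay silent:
    non-finite losses, vanishing gradients, and clipping doing most of the work."""
    if not math.isfinite(loss.item()):
        log(f"non-finite loss: {loss.item()}")

    loss.backward()
    # clip_grad_norm_ returns the total gradient norm *before* clipping.
    grad_norm = float(torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm))

    if grad_norm < 1e-7:
        log(f"vanishing gradients: norm={grad_norm:.2e}")
    elif grad_norm > 10 * max_grad_norm:
        log(f"clipping is doing a lot of work: pre-clip norm={grad_norm:.2e}")

    optimizer.step()
    optimizer.zero_grad()
    return grad_norm
```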
This is nice. I really like Kagi and while I don’t really need something like Ente, I’ll probably end up signing up for other services highlighted here.
Ebikes and small scooters make up the vast majority of deliveries in major cities. Can't remember the last time a delivery driver came in a car in NYC.