Hacker News | Mars008's comments

Half a dozen NVidia cards over more than a decade on Win/Lin. No major problems so far. I had to install/remove drivers manually, but only because I needed exact versions for some other software. Intel on Win/Lin works fine too.

> approximately equal the price of Opus tokens needed to build

This is probably not accidental.


There are old, disabled, and sick people who would rather buy online than walk. Normally I walk about a mile to the grocery store several times a week. But when sick, Amazon Fresh or Whole Foods is the best price/quality/time option.

> They are born knowing how to walk

Most animals know how to walk. They have pre-built 'knowledge' and start with it once they have the muscles. The main difference is that some species develop muscles before they are born. Others don't; some can't even see. But as soon as they develop, they can walk without even watching how others do it. In the same way, birds can learn, or be trained, to fly without seeing examples. They start flapping both wings in sync; this is pre-built.


If I remember correctly, after leaving OpenAI with a bang, Ilya founded a company and attracted billions of dollars promising AGI soon. Now what?


> Why does Microsoft keep releasing models trained on synthetic data?

Why not? That's the way to go. In some domains it's the only way to go.


> When talking with non-tech people around me, it’s really not about “rational minds”, it’s that people really don’t understand how all this works and as such don’t see the limitations of it.

What are the limits? We know the limits of naked LLMs. Less so of LLMs + current tools. Even less of LLMs + future tools. And we can only guess about LLMs + other models + future tools. I mean, moving forward likely requires complexity, research, and engineering. We don't know the limits of this approach even without any major breakthrough. We can't predict, but if a breakthrough happens it will all be different, and better than we can foresee today.


The price, power, and size. Make it cheap, low-power, and small enough for mobile. One way to do this is inference in 4, 2, or 1 bit. Also, GPUs are parallel, and most tasks can be split across several GPUs. Just by adding them you can scale up, to infinity in theory. So datacenters aren't going anywhere; they will still dominate.
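As a toy illustration of the low-bit inference idea, here is a minimal sketch of symmetric 4-bit weight quantization, where each float weight is rounded to an integer in [-8, 7] with a single per-tensor scale. This is not how any particular runtime implements it (real systems use per-group scales, calibration, etc.); the function names are made up for the example.

```python
import numpy as np

def quantize_4bit(w):
    """Symmetric 4-bit quantization: map floats to ints in [-8, 7]
    with one scale per tensor (a deliberately simple choice)."""
    scale = float(np.abs(w).max()) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the 4-bit integers."""
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.5, 0.33, 0.7], dtype=np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)
# per-weight reconstruction error is bounded by scale / 2
```

The storage win is what matters for price and power: 4 bits per weight instead of 32, at the cost of a bounded rounding error per weight.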

Another way is CPU + fast memory, like Apple does. It's limited but power efficient.

It looks like, with ecosystem development, we need the whole spectrum: from big models + tools running in datacenters, to smaller models running locally, to even smaller ones on mobile devices and robots.


My point is that revising autodiff is overdue.


* revisiting


He has no money


> I don't want to be too harsh on the study authors

Well, I'll do it for you. There is a lot of attention-grabbing bull*it out there. For example, I've seen a study on LinkedIn claiming 60% of Indians use AI daily in their jobs, and only 10% of Japanese. You can guess who did it; very patriotic, but far from reality.

