Hacker News | jomohke's comments

They likely have other things to do.

In this quote I don't think he means it from the business side. He's claiming more data allows a better product:

> ... the answers are a statistical synthesis of all of the knowledge the model makers can get their hands on, and are completely unique to every individual; at the same time, every individual user’s usage should, at least in theory, make the model better over time.

> It follows, then, that ChatGPT should obviously have an advertising model. This isn’t just a function of needing to make money: advertising would make ChatGPT a better product. It would have more users using it more, providing more feedback; capturing purchase signals — not from affiliate links, but from personalized ads — would create a richer understanding of individual users, enabling better responses.

But there is a more trivial way that it could be "better" with ads: they could give free users more quota (and/or better models), since there's some income from them.

The idea of ChatGPT's own output being modified to sell products sounds awful to me, but placing ads alongside the chat that aren't relevant to the current conversation sounds like an OK compromise for free users. That's what Gmail does, and most people here on HN seem to use it.


Is this why everyone only seems to know the first half of Dario's quote? The guy in that video is commenting on a 40-second clip from Twitter, not the original interview.

I posted a link and transcription of the rest of his "three to six months" quote here: https://news.ycombinator.com/item?id=46126784


Thank you.

Why do people always stop this quote at the breath? The rest of it says that he still thinks they need tech employees.

> .... and in 12 months, we might be in a world where the AI is writing essentially all of the code. But the programmer still needs to specify: what are the conditions of what you're doing? What is the overall design decision? How do we collaborate with other code that has been written? How do we have some common sense about whether this is a secure design or an insecure design? So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced.

(He then said it would continue improving, but this was not in the 12 month prediction.)

Source interview: https://www.youtube.com/live/esCSpbDPJik?si=kYt9oSD5bZxNE-Mn


Some places resist this because it causes a "rich get richer" effect in popularity. But it's admittedly convenient.


My initial assumption is that this is more about Photomator than Pixelmator (i.e. their Lightroom alternative rather than their Photoshop alternative).

Photomator has shown that you can add a lot of professional-level editing control to an Apple-Photos-like interface without making it difficult to use.

Their ML team also seems quite good — for instance, their spot/object removal tool was often more reliable for me than the one in Lightroom, despite being from a far smaller team than Adobe.

(I also feel that Photoshop has declined in cultural significance in recent years, and that Lightroom is the more significant tool going forward, but that could reflect my own bubble.)


I think that's why the GP said "on the same machine" — I read the article as comparing to the typical DB that is located on a separate machine in the data center, accessed over a network, as is currently the common setup.

But if we're considering running SQLite, the apt comparison would be against other DBs running on the same machine, because we've already decided local data storage is acceptable.

I assume Postgres/MySQL/etc would have higher latency than SQLite due to IPC overhead — but how significant is it?


(replying to myself)

I ran a quick, non-scientific Python script on my MacBook M2 using local Postgres and SQLite installs with no tuning.

It does 200 simple selects, after inserting random data, to match the article's "200 queries to build one web page".

(We're mostly worrying about real-world latency here, so I don't think the exact details of the simple queries matter. I ran it a few times to check that the values don't change drastically between runs.)
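The script itself isn't posted, but a minimal sketch of the SQLite half would look something like this (the table layout, row count, and timing approach are my own assumptions, not the original script):

```python
import os
import sqlite3
import statistics
import tempfile
import time

def bench_sqlite(n_rows=1000, n_queries=200):
    """Time single-row SELECTs against an on-disk SQLite database."""
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")
    # Insert some filler rows to query against.
    conn.executemany("INSERT INTO items (val) VALUES (?)",
                     [(f"row-{i}",) for i in range(n_rows)])
    conn.commit()

    times_ms = []
    for i in range(n_queries):
        start = time.perf_counter()
        conn.execute("SELECT val FROM items WHERE id = ?",
                     (i % n_rows + 1,)).fetchone()
        times_ms.append((time.perf_counter() - start) * 1000)
    conn.close()

    return {"min": min(times_ms), "max": max(times_ms),
            "mean": statistics.mean(times_ms),
            "median": statistics.median(times_ms),
            "total": sum(times_ms)}

stats = bench_sqlite()
print(f"Median: {stats['median']:.3f} ms, total: {stats['total']:.3f} ms")
```

For the Postgres side you'd swap in a client library such as psycopg connecting to a local server; the query loop itself would be identical.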

    SQLite:
    Query Times:
      Min: 0.007 ms
      Max: 0.031 ms
      Mean: 0.007 ms
      Median: 0.007 ms
    Total for 200 queries: 1.126 ms

    PostgreSQL:
    Query Times:
      Min: 0.023 ms
      Max: 0.170 ms
      Mean: 0.028 ms
      Median: 0.026 ms
    Total for 200 queries: 4.361 ms
(Again, this is very quick and non-optimised, so I wouldn't take the measured differences too seriously.)

I've seen typical individual query latency of 4ms in the past when running DBs on separate machines, hence 200 queries would add almost an entire second to the page load.

But this is 4.3ms total, which sounds reasonable enough for most systems I've built.

A single non-trivial query required for page load could add more time than this entire latency overhead. So I'd probably pick DBs based on factors other than latency in most cases.


DeepMind's recent model is trained with Lean. It scored a silver medal at the International Mathematical Olympiad (and was only one point away from gold).

> AlphaProof is a system that trains itself to prove mathematical statements in the formal language Lean. It couples a pre-trained language model with the AlphaZero reinforcement learning algorithm, which previously taught itself how to master the games of chess, shogi and Go

https://deepmind.google/discover/blog/ai-solves-imo-problems...
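For readers unfamiliar with Lean: the statements AlphaProof works with are machine-checkable theorems. A trivial illustrative example in Lean 4 (nothing to do with the actual IMO problems):

```lean
-- A formally stated and proved theorem in Lean 4:
-- addition on natural numbers is commutative.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```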


And yet the data is ephemeral, so in many cases it could be handled faster without the guarantees of "real" block storage.

GitHub Actions is pay-per-minute-used, unfortunately, so they may have a negative incentive to speed things up. Unless people become frustrated enough to switch to a non-bundled provider.


Not all the data is ephemeral: stuff like node_modules needs to be cached. And if you're suggesting tmpfs, that's roughly 50x the cost of a fully tricked-out cloud SSD, which is already ridiculously expensive.


Yes, it's not arbitrary at all — they're only offering it on devices with at least 8GB of memory.

The iPhone 15 Pros were the first iPhones with 8GB. All M1+ Macs/iPads have at least 8GB of RAM.

LLMs are very memory hungry, so frankly I'm a little surprised they support such low memory requirements (especially knowing that the system is running other tasks, not just ML). Microsoft's Copilot+ program has a 16GB minimum.
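As a rough illustration of why 8GB is tight: the weight storage alone for an on-device model scales with parameter count and quantization level. (The ~3B parameter figure below is my own assumption for illustration, not Apple's published spec; this also ignores activations, KV cache, and runtime overhead.)

```python
def model_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight-storage footprint of an LLM in GiB,
    ignoring activations, KV cache, and runtime overhead."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight / (1024 ** 3)

# A hypothetical ~3B-parameter on-device model at different quantizations:
for bits in (16, 8, 4):
    print(f"{bits}-bit: {model_memory_gb(3, bits):.1f} GB")
```

Even at 4-bit quantization that's well over a gigabyte for the weights alone, on a device that also has to run the OS and foreground apps.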


