Hacker News | ehsankia's comments

So you're basically saying that you can spend as much money to get a knife that will cut as well but requires regular work put into it, whereas this doesn't? I think that's the whole pitch here...


Kinda annoying that the article doesn't really answer the core question, which is how much startup time was saved. It does give a 0.05ms-per-tooltip figure, so I guess multiplied by 38,000 that gives ~2s saved, which is not too bad.
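The arithmetic behind that estimate (both figures come from the article) is just:

```python
# Back-of-envelope estimate: per-tooltip creation cost times tooltip count.
per_tooltip_ms = 0.05   # cost to create one tooltip, per the article
tooltips = 38_000       # number of tooltips created at startup
saved_seconds = per_tooltip_ms * tooltips / 1000
print(saved_seconds)    # ~1.9 seconds
```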


"Together, these two problems can result in the editor spending an extremely long time just creating unused tooltips. In a debug build of the engine, creating all of these tooltips resulted in 2-5 seconds of startup time. In comparison development builds were faster, taking just under a second."


1. It might not be the best across all metrics today, but it definitely was a few years ago.

2. While it's true that other browsers like Firefox have been catching up to Chrome in speed, it's still true that Chrome helped lead the way, and if not for it, the web would likely be far slower today.

3. There has been an explosion of other browsers in the past few years, but admittedly they're all Chromium-based, so even that wouldn't have been possible without Chrome.


Safari has been better for going on 5 years now; the funny thing is it was worse for long enough that it seems everyone, even to this day, refuses to believe it.

Faster in basically every dimension. Supporting way more than FF in terms of specs. Way more efficient on battery. Better feeling scroll, better UI.


Any source for that?

https://www.browserating.com/ doesn't put it in top5 on any non-ios platform?


Chrome caught up in the last year or so, but Speedometer is also fairly arbitrary. Open/close, tab open/close, tab switching, scrolling, initial load and resizing are all still far better. Actual app performance depends on the app, but for a few years Safari was clearly better.


So your source is your personal opinion. Got it.


It's 100% objective; in fact, among better web developers this has been common knowledge. There were plenty of articles, side-by-sides and benchmarks over the last few years showing it.


Agreed. The only thing lacking is multi Google account/profile support.


Profiles came to Safari in iOS 17. https://support.apple.com/en-au/guide/iphone/iphd27a9ff22/17...

Similarly with Safari 17 on macOS.


It has gotten absolutely out of control. I will be reading an article about a new game, and the article won't even have a link to the store page to buy the game...


Which store page should they be linking to? Inevitably what you’re asking for is how we’ve ended up with sites spinning off thousands of articles stuffed full of affiliate links.


Just link to a few? There's a finite set of stores a game is usually on

On PC, it'll be Steam, GOG, maybe Humble. Then on consoles you have Xbox, PlayStation and Nintendo. If you wanna put an affiliate link, go for it. It's better than no link at all.

These articles already bait my click for ads by never putting the name of the game in the title anyways. At least let me get to the game and buy it.


Before a model is announced, they use codenames on the arenas. If you look online, you can see people posting about new secret models and people trying to guess whose model it is.


What are "the arenas"?


Blind rating battlegrounds; one is https://lmarena.ai/ (first Google result)


I don't quite get what this is? I asked the AI on the site "What is imarena.ai?" and it just gave some hallucinated answer that made no sense.


People vote on the performance of AI, generating ranking boards.
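For the curious, leaderboards built from pairwise votes typically use an Elo-style rating scheme (the exact method and parameters here are illustrative, not necessarily what lmarena uses):

```python
# Elo-style update: after a vote, the winner takes rating points from the
# loser, scaled by how surprising the result was. k and the 400 scale
# factor are conventional chess values, used here only for illustration.
def elo_update(r_winner, r_loser, k=32):
    expected_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_win)
    return r_winner + delta, r_loser - delta

# Two equally rated models: the winner gains exactly k/2 points.
print(elo_update(1000, 1000))  # (1016.0, 984.0)
```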


Ah, that was the missing piece of information! Thanks!


Analogies are just that: they are meant to put things in perspective. Obviously the LLM doesn't have "senses" in the human way, and it doesn't "see" words, but the point is that the LLM perceives (or whatever other word you want to use here that is less anthropomorphic) the word as a single indivisible thing (a token).

In more machine learning terms, it isn't trained to autocomplete answers based on individual letters in the prompt. What we see as the 9 letters "blueberry", it "sees" as a vector of weights.

> Illusions don't fool our intelligence, they fool our senses

That's exactly why this is a good analogy here. The blueberry question isn't fooling the LLM's intelligence either; it's fooling its ability to know what that "token" (vector of weights) is made out of.

A different analogy could be: imagine a being with a sense that lets it "see" magnetic lines, and it showed you an object and asked you where the north pole was. You, not having this "sense", could try to guess based on past knowledge of said object, but it would just be a guess. You can't "see" those magnetic lines the way that being can.


> Obviously the LLM doesn't have "senses" in the human way, and it doesn't "see" words

> A different analogy could be, imagine a being that had a sense that you "see" magnetic lines, and they showed you an object and asked you

If my grandmother had wheels she would have been a bicycle.

At some point to hold the analogy, your mind must perform so many contortions that it defeats the purpose of the analogy itself.


> If my grandmother had wheels she would have been a bicycle.

That's irrelevant here; that was someone trying to convert one dish into another dish.

> your mind must perform so many contortions that it defeats the purpose

I disagree, what contortions? The only argument you've provided is that "LLMs don't have senses". Well yes, that's the whole point of an analogy. I still hold that the way LLMs interpret tokens is analogous to a "sense".


> the LLM perceives [...] the word as a single indivisible thing (a token).

Two actually, "blue" and "berry". https://platform.openai.com/tokenizer

"b l u e b e r r y" is 9 tokens though, and it still failed miserably.


And I don't think it makes sense for the SSO provider to leak a list of all your accounts to the website either. That being said, the SSO provider itself could maybe track that information.


Maybe just my biased brain, but the title made it sound like they were half a million under, not over. In some way, this is how 1000 piece jigsaw puzzles will never be exactly 1000 pieces. As long as there's at least 1000, I think most people are fine, especially as an art piece. And of course as mentioned, there's the possibility that there's filler inside.

It would've been much worse if it was under though.


The ones that are 25 pieces x 40 pieces are really 1000 pieces. But some puzzles are 27x38 or other more square form factors.


25x40 is rarely used because non-square pieces give a lot more info about placement, and a 25x40 rectangle is almost twice as wide as it is tall. It's rarely the right kind of aspect ratio.


> In some way, this is how 1000 piece jigsaw puzzles will never be exactly 1000 pieces.

What??


Yeah, most jigsaw puzzles do not have precisely the number of pieces advertised. Here's an amusing video (by the channel Stand-up Maths) that does a deep dive into it. https://www.youtube.com/watch?v=vXWvptwoCl8

TLDR if you don't have a half-hour: puzzles are usually cut with the pieces on grids, and not all aspect ratios are conducive to that with all piece counts. Like, you might want a 2:3 shaped puzzle with 500 pieces, and 18x28=504 is close enough.
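The trade-off the video describes can be sketched as a small search: pick the rows x cols grid that has at least the advertised count (buyers tolerate extras, not shortages) while staying closest to the target aspect ratio. This is my own illustrative version, not the video's exact method:

```python
def best_grid(advertised, ratio, slack=0.05):
    """Find the rows x cols grid with advertised <= rows*cols <=
    advertised*(1+slack) whose rows/cols ratio is closest to `ratio`."""
    best = None
    for rows in range(1, advertised + 1):
        for cols in range(rows, advertised + 1):
            n = rows * cols
            if n < advertised:
                continue
            if n > advertised * (1 + slack):
                break  # n only grows as cols increases
            err = abs(rows / cols - ratio)
            if best is None or err < best[0]:
                best = (err, rows, cols, n)
    return best

# A 2:3-ish 500-piece puzzle lands on the 18x28 = 504 grid from the comment.
err, rows, cols, n = best_grid(500, 2 / 3)
print(f"{rows}x{cols} = {n}")  # 18x28 = 504
```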


Not extreme at all; a lot of people use the cheapest, smallest VPS for their hobby work. I know I do (albeit not AWS). Thanks for sharing, hope they improve the automatic detection there.


They put anything that makes sense. I don't know if including random movies makes sense.

They got YouTube Premium, which is like $15. 30TB of storage is a bit excessive and has no equivalent, but 20TB is around $100 a month.


I’m not seeing the relevance of YouTube and the One services to this at all.

I get that Big Tech loves to try to pull you into their orbit whenever you use one of their services, but this risks alienating customers who won’t use those unrelated services and may begrudge Google making them pay for them.


It's trying to normalise it, make it just another part of your Google experience, alongside (and integrated with) your other Google tools. (Though that's weakened a bit by calling it 'AI Pro/Ultra' imo.)


Idk if anyone will see these offerings as more than just an added bonus, especially when you compare to OAI, which asks for more for only the AI models.


I imagine this could be seen as an anticompetitive lever, whereby Google is using its dominance in one field to reduce competition in another. Adding it here is a way to normalise that addition for when mass-market-priced plans become available.

Tucking it towards the end of the list doesn't change that.

