Hacker News new | past | comments | ask | show | jobs | submit | gnatolf's comments login

Some shoes (Meindl boots) actually go up in price for larger sizes (>46 EU, roughly 13 US I think) due to the additional material cost.


Not a hint of adaptation to mobile (chrome/android).


I pushed an update. Please give it another go, sorry about that!


My reaction was just the same, but look at Gruber:

https://daringfireball.net/linked/2025/05/21/sam-and-jony-io

and Gruber is stirring up drama about why his links don't do well on HN.


> It conveys grand ambition, but without pretension.

I wonder how it's possible for a human to watch that video and think it doesn't convey pretension.


Indeed it may as well be an advert for pretension itself.


As disrespectful as it is wrong and overly generalized.


> As disrespectful as it is wrong and overly generalized

How? Since the fall of the USSR, what major technology took a lab-to-living-room path where the “lab” bit wasn’t American?


Lots of things: https://en.m.wikipedia.org/wiki/Timeline_of_historic_inventi...

Still, the statement that a large (but not all) portion of them were American inventions is probably defensible.

On the other hand, the American economy was one of the few major ones unravaged by WWII conflict.


I’m talking about paradigm-shifting conventions. Microprocessors. LLMs. The Internet. Lithium-ion batteries. Our institutions reliably did the basic research to bias us (and the USSR) towards discovering their underlying phenomena and then maturing that into technology. Today, the only real player in that space is China.


> paradigm-shifting conventions

   - Basic oxygen steelmaking
   - Float glass
   - Orbital satellite
   - High speed railway
   - Public key cryptography
   - Orbital space station
   - DNA/RNA/sequencing
   - Self-driving car
   - Cellular phone service
   - CD-ROM
   - Direct satellite television
   - Laptop
   - (Also some other stuff after 1985...)
Take many and don't be a dick about most.


JumpCrisscross said "half century"; while I would doubt the "practically every" part of their claim and, like you, would instead reduce that to somewhere between "lots" and "most", your list has a lot of stuff from before 1975.

You can definitely have CD-ROMs and I think it's fair to give you DNA sequencing (though that's not really one single thing), but everything else is questionable or just not correct when I look up the history of those exact things on the same source you link to above — their own wikipedia pages.

As for "questionable":

- Cellular phone service: depends what you count as such, given people have been working on the predecessors to what's now called 1G since about the invention of the radio; but 1G itself would be Japan in 1979, so if that's your cut-off-point, then you could have it I guess.

- Laptop: only if you're counting the Portal R2E CCMC, because the first clamshell laptop was the Grid Compass. While the founder of the company and designer of the laptop were British, they were doing the work in the USA.

As for "just no":

- Basic oxygen steelmaking: 1856 for the first demonstration in the UK, 1940s for industrialisation in Austria

- Float glass: 1950s

- Orbital satellite: 1957

- High speed railway: depends what you mean by "high speed", but you could easily claim 1938 or several different points in the 1950s

- Public key cryptography: 1973 in secret in the UK, but they were classified for ages and only the US invention of the same a few years later made it commercially available, so the "lab" part in the lab-to-home path was definitely American.

- Orbital space station: 1971 (I'd count this as a paradigm shift even if there's no living rooms anywhere in sight here)

- Direct satellite television: why did you put this in your list? Not only is the first ever satellite TV broadcast the USA's Telstar in 1962, the first direct broadcast satellite was ATS-6 in 1974: https://en.wikipedia.org/wiki/ATS-6

- Self-driving car: Even the SOTA in self driving cars is not "paradigm-shifting", so it's not really been invented yet at the level required to be on this list


A lot of the comments on HN lately are rightfully focused on this formative brain exercise that leads to intuition and conceptual understanding that is chiselled away by the shortcuts that GenAI provides. I wonder where the gain of productivity from GenAI and the drop off in 'our brain'-quality intersects.


It's actually not that different from talking with employees; however, LLMs still have very significant shortfalls (which you learn about after using them a lot).

If a manager doesn't know anything about what their employees are working on, they are basically fucked. That much holds up with LLMs. The simple stuff mostly works, but the complex stuff isn't going to pan out for you, and it will take a while to figure out that that's the direction you went in.


One comparison is with Stack Overflow (SO). Given a task, there are usually multiple answers. The question may not even be relevant; often, multiple question pages must be compared.

The best answer is the one that fits the aesthetics of my approach--one that didn't exist before (there was only the problem before), but the answer is simple, straightforward, or adaptable.

Having multiple answers is good because different minds evaluated the question. It is a buffet of alternatives, starting from others' first principles, mistakes, and experience. Some are rejected outright from some tacit taste organ. Others become long-lived browser tabs, a promise to read carefully someday (never).

All this is void if it turns out that using SO is degenerative in the same way, though.


We should probably require AI to always be able to explain its conclusions.

That way we can quickly assimilate knowledge from the AI and theoretically always have at least as much knowledge as the AI.

I suppose it also means that we can verify that the AI is not lying to us.


Unfortunately we don't have that kind of AI. We only have the useless kind.


The churn of staying on top of this means, to me, that we'll also chew through experts in specific tools much faster. Gone are the days of established, trusted top performers, as every other week somebody creates a newer, better way of doing things. Everybody is going to drop off the hot tech at some point. Very exhausting.


The answer is simple: have your AI stay on top of this for you.

Mostly joke, but also not joke: https://news.smol.ai/


I couldn't agree more; this 'polished' style the finished comment comes in is super boring to read. It's hard to put my finger on it, but the overall flow is just too... Samesame? I guess it's perfectly _expected_ to be predictable to read ;)


And then one of the iterations was asking for additional ways LLMs could be used, and then adding some of those as content, which seems odd but plausibly helpful brainstorming. Just the phrasing of the original question makes it sound like things the user isn't actually doing but wants in their comment, if that makes sense.

Thanks for the example chat, it was a valuable lesson for me!


Mostly just SNR issues.


Not to take away from that fact, but the share of freight moved by train (over time) is more interesting. It probably highlights how hard it is to scale train networks.


I did the same. In short examples like the ones used in the article, it's easy to reason about the states and transitions. But in a much larger codebase, it gets so much harder to even discover available transitions if one is leaning too much on the from/into implementations. Nice descriptive function names go a long way in terms of ergonomic coding.
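To illustrate the point about discoverability, here's a minimal sketch (all type and method names are hypothetical, not from the article): the same transition written once via a `From` impl and once via a named method. Grepping for "submit" only finds the second.

```rust
// Hypothetical order workflow: Draft -> Submitted, written two ways.
#[derive(Debug, PartialEq)]
struct Draft;

#[derive(Debug, PartialEq)]
struct Submitted {
    reviewer: String,
}

// Opaque: the transition hides behind a blanket `From` impl; at a call
// site it's just `.into()`, and the transition carries no extra inputs.
impl From<Draft> for Submitted {
    fn from(_d: Draft) -> Self {
        Submitted { reviewer: String::from("unassigned") }
    }
}

// Discoverable: a named method documents both the transition and the
// data it needs, and shows up when you search the codebase.
impl Draft {
    fn submit_for_review(self, reviewer: &str) -> Submitted {
        Submitted { reviewer: reviewer.to_string() }
    }
}

fn main() {
    let via_into: Submitted = Draft.into();
    let via_name = Draft.submit_for_review("alice");
    assert_eq!(via_into.reviewer, "unassigned");
    assert_eq!(via_name.reviewer, "alice");
}
```

In a large codebase the second form also lets the compiler guide you: typing `draft.` in an IDE lists every available transition, whereas `From`/`Into` impls are scattered and only visible at the target type.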

