Hacker News | cameronh90's comments

Ignoring that those numbers aren't directly comparable, it did make me wonder: if I had to give up either "AI" or chocolate tomorrow, which would I pick?

Even as an enormous chocolate lover (in all three senses) who eats chocolate several times a week, I'd probably choose AI instead.

OpenAI has alternatives, but I also currently spend more money on OpenAI than I do on chocolate.


I am just trying to help you write better. Your writing says "if I had to give up either AI or chocolate [...] I would probably choose AI". However, your language and intent seem to be that you would give up chocolate.

It’s a bit of a weird comparison: AI vs. a luxury sweet.

Maybe instead of the chocolate market, look at the global washing machine market of $65 billion.

I’d rather give up both AI and chocolate than my washing machine.


If you really wanted to know, you could stop eating chocolate or stop using AI and see if you break. Or give up each at different times and see how long you last without one or the other.

Rather, the problem I more often see with junior devs is pulling in a dozen dependencies when writing a single function would have done the job.

Indeed, part of becoming a senior developer is learning why you should avoid left-pad but accept date-fns.
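To illustrate the left-pad point: the entire package amounts to functionality that has been built into the language since ES2017, so writing it yourself is a one-liner (a sketch using the standard `String.prototype.padStart`, not the original left-pad source):

```typescript
// left-pad's whole job, done with a built-in (ES2017+).
// The function name and signature here are illustrative.
function leftPad(value: string, length: number, fill = " "): string {
  return value.padStart(length, fill);
}

console.log(leftPad("7", 3, "0")); // "007"
console.log(leftPad("ab", 4));     // "  ab"
```

A date library like date-fns, by contrast, encodes time zone rules, locale formats, and calendar arithmetic you genuinely don't want to reimplement — that's the judgment call being described.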

We’re still in the early stages of operationalising LLMs. This is like mobile apps in 2010 or SPA web dev in 2014. People are throwing a lot of stuff at the wall, and there’s going to be a ton of churn and chaos before we figure out how to use it and it settles down a bit. I used to joke that I didn’t like taking vacations because the entire front-end stack would have been chucked out and replaced with something new by the time I got back, but it’s pretty stable now.

Also, I find it odd that you’d characterise the current LLM progress as somehow being below where we hoped it would be. A few years back, people would have said you were absolutely nuts if you’d predicted how good these models would become. Very few people (apart from those trying to sell you something) were claiming we’d imminently enter a world where you type in an idea and out comes a complex solution without any further guidance or refining. Once AI can do that, we can just tell it to improve itself in a loop and AGI is just some GPU cycles away. Most people still expect - and hope - that’s a little way off yet.

That doesn’t mean the relative cost of abstracting and inlining hasn’t changed dramatically or that these tools aren’t incredibly useful when you figure out how to hold them.

Or you could just do what most people always do and wait for the trailblazers to either get burnt or figure out what works, and then jump on the bandwagon when it stabilises - but accept that when it does stabilise, you’ll be a few years behind those who have been picking shrapnel out of their hands for the last few years.


And in Germany - it seems - nothing works and nothing is possible?

That's true in general, but people do use these hobbyist boards as an alternative to a manufacturer dev board when prototyping an actual product.

It's reasonably common in the home automation space. A fair few low-volume (but nevertheless commercial) products are built around ESP32 chips now because they started with ESPHome or NodeMCU. The biggest energy provider in the UK (Octopus) even has a smart meter interface built on the ESP32.


There are some early tyre and brake dust collection systems which might help, but that won't do much for the road dust.

I've been wondering: if self-driving cars become widely usable and deployed in cities, will they be able to operate safely with harder tyre compounds and harder road surfaces that shed less but don't grip as well?

If nothing else, less aggressive driving should lead to less shedding.


Given how predictable this response was, how sure are you that you're any better?


The OLPC project clearly didn’t achieve its aims, but how would they have known that without trying?

More recently, the impact of smart phones on the developing world has been transformational, suggesting some of the ideas behind OLPC may have been good, but the specific implementation lacking. Thanks to smart phones, developing communities now have access to media in global languages, online education, finance, communication, markets (without having to travel for miles), disaster recovery, health resources and much more.

You can even now see rural villages themselves prioritise phone infrastructure over many things that on the surface seem more important - such as by fixing the phone charger before they fix the plumbing!


I don't have any unique insight on it, but I think herpesviruses are probably worse than we realise.

They hide from the immune system inside nerves or other immune cells, and seem to have a lot of weird associations with other issues over our lifetime, particularly neurological and immune problems.


I think I hear as many people calling it ChatGBT or ChatGTP as ChatGPT.


"Oh no it's GPT, a Generative Pretrained Transformer shaped into chat responses."


None of which, when searched, will lead the user to Claude, Qwen, et al.

Just OpenAI and ChatGPT.

So what’s your point?


I have never had to rote-learn anything since at least mid-childhood (I can't remember before then).

If it's something I need to do regularly, I eventually learn it through natural repetition while working towards the high level goal I was actually trying to achieve. Derivatives were like this for me. I still don't fully know the periodic table though, because it doesn't really come up in my life; if it's not something I need to do regularly, I just don't learn it.

My guess is this doesn't work for everything (or for everyone), and it probably depends on the learning curve you experience. If there are cliff edges in the curve that are not aligned with useful or enjoyable output, dedicated practice of some sort is probably needed to overcome them, which may take the form of rote learning, or, maybe better, spaced repetition or quizzing or similar. However at least for me, I've not encountered anything like that.
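For anyone unfamiliar with the spaced-repetition idea mentioned above, here's a toy sketch of the core mechanic (a Leitner-style scheduler I'm inventing for illustration — not any real app's algorithm): each correct answer doubles the interval until the next review, and a miss resets it.

```typescript
// Toy Leitner-style spaced repetition: doubling review intervals.
// All names here are illustrative, not from any real library.
interface Card {
  prompt: string;
  intervalDays: number; // days until the next review
}

function review(card: Card, correct: boolean): Card {
  return {
    ...card,
    // Double the interval on success; reset to 1 day on a miss.
    intervalDays: correct ? card.intervalDays * 2 : 1,
  };
}

let card: Card = { prompt: "Symbol for tungsten?", intervalDays: 1 };
card = review(card, true);  // next review in 2 days
card = review(card, true);  // next review in 4 days
card = review(card, false); // missed: back to 1 day
```

The point of the widening gaps is to schedule each review just before you'd otherwise forget — a way of manufacturing the "useful repetition" that natural work provides for free when a skill comes up often enough.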

If I were to speculate about why rote learning doesn't work well for me: I don't seem to experience a feeling of reward during it, and my ability to learn seems to be heavily tied to that feeling. I learn far more quickly when it's a problem I've been struggling with for a while and finally solve, or a problem I really wanted to solve, as the feeling of reward is much higher.

