
Blockchain only has 2 legitimate uses (from an economic standpoint) as far as I can tell.

1) Bitcoin figured out how to create artificial scarcity, and got enough buy-in that the scarcity actually became valuable.

2) Some privacy coins serve an actual economic niche for illegal activity.

Then there's a long list of snake oil uses, and competition with payment providers doesn't even crack the top 20 of those. Modern day tulip mania.


Sounds like LLMs. The legitimate uses are:

1) Language tasks.

2) ...

I can't even think of what #2 is. If the technology gets better at writing code, perhaps it can start to do other things by writing software to do them, but then you effectively have AGI, so...


I don't know if a bunch of sloppy jQuery modules were ever really a viable option for an SPA. People tried to do it, sure, but I'd say the SPA era really started with backbone.js


ExtJS/Sencha was quite powerful and complete. I built tons of SPAs with it in the late 00s.


Wow, I remember Sencha! It’s been a while since I’d heard that name.


I mostly remember doing $(document).ready blocks in PHP templates :)


I wrote my first SPA, a knowledge graph editor, using GWT (Google Web Toolkit) which compiled a dialect of Java to JavaScript circa 2006 or so.


It's still the best RDB schema creation/migration tool I know of. It has a crazy number of plugins to handle all sorts of unusual field types and indexing. I usually add Django to any project I'm doing that involves an RDB just to handle migrations. As long as you avoid any runtime use of the ORM, it's golden.
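
For anyone who hasn't tried it, this is roughly what "Django as a migration tool only" looks like. Just a sketch from memory; the app name, DB settings, and file name are placeholders, not anything from a real project:

    # standalone_migrate.py -- hypothetical sketch: Django used purely for schema
    # management, configured in code instead of a full project layout.
    import django
    from django.conf import settings
    from django.core.management import call_command

    settings.configure(
        INSTALLED_APPS=["schema_app"],  # placeholder app containing only models.py
        DATABASES={
            "default": {
                "ENGINE": "django.db.backends.postgresql",
                "NAME": "mydb",   # placeholder connection details
                "USER": "me",
            }
        },
    )
    django.setup()

    # Diff the models against existing migration files, then apply to the DB.
    call_command("makemigrations", "schema_app")
    call_command("migrate")

The models only ever get imported at migration time, so nothing stops the rest of the codebase from talking to the database however it wants.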


I kinda consider it a P != NP type thing. If I need to write a simple function, it will almost always take me more time to implement it than to verify whether an implementation suits my needs. There are exceptions, but overall when coding with LLMs this seems to hold true. Asking the LLM to write the function and then checking its work is a time saver.


I think this perspective is kinda key. Shifting attention towards more and better ways to verify code can probably improve quality rather than degrade it.


I see it as basically Cunningham's Law. It's easier to look at the LLM's attempt at a solution and see how it's wrong than to write a perfectly correct solution the first time.


Came here to post this; it is precisely right.


Yeah, I was wondering about that too. Even small-cap PoW chains have dedicated mining hardware that is orders of magnitude faster than a GPU. I guess in theory it could work if you cobbled together enough hacked AWS accounts, but the scale required to make any sort of real profit would be gigantic. It just doesn't seem worthwhile.


Exactly. The problem is having enough volume on exchanges to unload what you've mined. Some of these tokens only see a few thousand in daily volume, and any selling risks dumping the entire market. If you can steal the compute, sure, that's one thing, but it's very risky for not a huge payout.


Yeah, the brain is a sparse MoE. There is a lot of overlap in the hardware of the "language brain" and the "math brain". That being said, I can discuss software concepts in a foreign language, but struggle with basic arithmetic in anything but English. So while the hardware might be the same, the virtualization layer that sits on top might have some kind of compartmentalization.


I think the fundamental problem is that Next.js is trying to do two things at once. It wants to a) be fast to load for content that is sensitive to load speed (SEO content, landing pages, social-media-shareable content, etc.), and b) support complex client-side logic (single-page-app navigation, state stores, etc.). Doing those two things at the same time is really hard. It is also, in my experience, completely unnecessary.

Use minimal HTML/CSS with server-side rendering (and maybe a CDN/edge computing) for stuff that needs to load fast. Use React/Vue/whatever heavy framework for stuff that needs complex functionality. If you keep them separate, it's all very easy. If you combine them, it becomes really difficult to reason about.


This is my approach. My website tyleo.com is just a bunch of classic CSS/HTML webpage stuff. If a page needs a small amount of JS, I just bundle it ad hoc. More complex pages get the full React/SPA treatment, but that doesn't mean the whole website needs to be that way.

As an aside, I reuse code by using React as the template engine for the HTML. Each page essentially has a toggle for whether to ship it in dynamic mode or static mode, which includes either the full JS bundles or nothing.


SvelteKit excels at this out of the box. And it's simpler/easier than vanilla, let alone anything React-based.


When I was first getting into Deep Learning, learning the proof of the universal approximation theorem helped a lot. Once you understand why neural networks are able to approximate functions, it makes everything built on top of them much easier to understand.
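
For reference, the statement I mean is roughly the classic single-hidden-layer form (Cybenko 1989 / Hornik 1991), with a fixed non-polynomial activation σ:

    % Universal approximation, single hidden layer: any continuous function on a
    % compact set can be approximated arbitrarily well by a finite sum of
    % scaled, shifted activations.
    \[
      \forall f \in C(K),\ K \subset \mathbb{R}^n \text{ compact},\ \forall \varepsilon > 0:\quad
      \exists N,\ \alpha_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n
    \]
    \[
      \text{such that}\quad
      \sup_{x \in K}\,\Bigl|\, f(x) \;-\; \sum_{i=1}^{N} \alpha_i\,
      \sigma\bigl(w_i^{\top} x + b_i\bigr) \Bigr| \;<\; \varepsilon .
    \]

The proof doesn't tell you how to find the weights, but it makes it much less mysterious that gradient descent has something to find.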


I've used NEAT for a few different things. The main upside of it is that it requires a lot less hyper-parameter tuning than modern reinforcement learning options. But that's really the only advantage. It really only works on a subset of reinforcement learning tasks (online episodic). Also, it is a very inefficient search of the solution space as compared to modern options like PPO. It also only works on problems with fairly low-dimensional inputs/outputs.

That being said, it's elegant and easy to reason about. And it's a nice intro into reinforcement learning. So definitely worth learning.
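
If it helps to see the shape of it, here's roughly what the whole loop looks like with the neat-python package, using Gymnasium's CartPole as a stand-in episodic task. Just a sketch: "neat_config.ini" is a placeholder for the usual neat-python config file, assumed here to declare 4 inputs and 2 outputs.

    # Sketch of a NEAT training loop (neat-python + Gymnasium), not a drop-in script.
    import gymnasium as gym
    import neat

    def run_episode(net, env):
        # Roll out one episode and return its total reward as the fitness.
        obs, _ = env.reset()
        total, done = 0.0, False
        while not done:
            outputs = net.activate(obs)              # low-dimensional in/out
            action = outputs.index(max(outputs))     # pick the stronger output
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        return total

    def eval_genomes(genomes, config):
        env = gym.make("CartPole-v1")
        for _, genome in genomes:
            net = neat.nn.FeedForwardNetwork.create(genome, config)
            genome.fitness = run_episode(net, env)   # one scalar per genome

    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         "neat_config.ini")          # placeholder config file
    pop = neat.Population(config)
    winner = pop.run(eval_genomes, 50)               # evolve for 50 generations

Everything interesting (speciation, topology mutation, crossover) happens inside pop.run; all you supply is a scalar fitness per genome, which is both why there's so little to tune and why it's pretty much limited to episodic tasks.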


This criticism is all the more poignant given that it comes from Nabokov. He is one of the few authors for whose works the Russian and English versions are almost equivalent; he was bilingual and did the translation himself.

