
Sure, the market chose Markdown, but this simply led me to the conclusion that the market isn’t worth following. Of course the mismatch creates some friction, but the benefits of org-mode, for me personally, easily outweigh that.


Sounds smart but it’s abusing the semantics of “refine” and is therefore ultimately vacuous.


I think it is really just the difference between chemically refining something and electrically refining something.

Raw AC comes in, then gets stepped down, filtered, converted into DC rails, gated, timed, and pulsed. That’s already an industrial refinement process. The "crude" incoming power is shaped into the precise, stable forms that CPUs, GPUs, RAM, storage, and networking can actually use.

Then those stable voltages get flipped billions of times per second into ordered states, which become instructions, models, inferences, and other high-value "product."

It sure seems like a series of processes for refining something.


What problem is this trying to solve, and does it actually succeed at solving it? I'm struggling to see the appeal, given that the JS still needs to model the internal structure of the template in order to fill the slots.


The Shadow DOM can auto-fill the slots in the case where a web component has slot fillers in the main DOM. You still need JS to invoke/create the shadow DOM, but in that case your JS might be minimal and not need to model the interior structure.

But the big problem that template tries to solve is building DOM fragments that are parsed but not "live" in the open document. Before the template tag there was no good way to do that other than JS and createElement/createElementNS, and that has always been slower than the browser's well-optimized HTML parser.

Also, the slot tag does solve a minor problem of being the first tag whose out-of-the-box (browser CSS) behavior is `display: contents;`. It's obviously not a huge lift from the CSS one-liner, but there are still some uses for it even outside of templates.
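A minimal sketch of the pattern described above (element and template names are illustrative, not from any particular library): the template holds inert, parser-built markup; a few lines of JS clone it into a shadow root; and the browser auto-fills the slots from the element's light-DOM children.

```html
<!-- Inert fragment: parsed by the browser's HTML parser, but not live. -->
<template id="card-tpl">
  <style>h2 { margin: 0; }</style>
  <h2><slot name="title">Untitled</slot></h2>
  <slot></slot> <!-- default slot catches remaining children -->
</template>

<my-card>
  <span slot="title">Hello</span>
  <p>Body text is auto-filled into the default slot.</p>
</my-card>

<script>
  customElements.define('my-card', class extends HTMLElement {
    connectedCallback() {
      // Clone the template into a shadow root; slot assignment
      // happens automatically, so the JS never models the interior.
      this.attachShadow({ mode: 'open' })
          .append(document.getElementById('card-tpl').content.cloneNode(true));
    }
  });
</script>
```

The JS here only invokes the shadow DOM; it never touches the template's internal structure, which is the "minimal JS" case mentioned above.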


Also inflation.


Given how immigration and trade controls have worked over this century, anybody who goes up there first will put restrictions on who else can arrive and thrive.


Yeah, that's confusing, but the Hugging Face page says it has 9.78B parameters.

https://huggingface.co/ibm-ai-platform/Bamba-9B-fp8


Isn't this exactly the point of this model? No need to memorize everything (which is what makes transformers expensive), just keep the relevant info. SSMs are essentially recurrent models.
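A toy sketch (illustrative only, not the Bamba architecture) of why an SSM is recurrent: each step folds the input into a fixed-size state that decays over time, so memory cost stays constant, whereas a transformer must cache keys/values for every past token.

```javascript
// Toy scalar linear recurrence in the spirit of an SSM layer:
//   state = a * state + b * input;  output = c * state.
// Here the state is a single number; in a real SSM it is a
// fixed-size vector, but it never grows with sequence length.
function ssmScan(inputs, a = 0.9, b = 1.0, c = 1.0) {
  let state = 0;
  const outputs = [];
  for (const x of inputs) {
    state = a * state + b * x; // old info decays, new info mixes in
    outputs.push(c * state);
  }
  return { outputs, stateSize: 1 }; // O(1) memory per layer
}

// A transformer's KV cache, by contrast, grows with the sequence:
function kvCacheEntries(numTokens) {
  return numTokens; // O(n) memory: the source of the expense
}

const { outputs, stateSize } = ssmScan([1, 0, 0, 0]);
console.log(outputs);                      // decaying trace of the first input
console.log(stateSize, kvCacheEntries(4)); // fixed state vs. growing cache
```

The decay factor `a < 1` is exactly the "forgetting" the thread discusses: information not deemed relevant fades and cannot be recovered later.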


You can't always know what will be "relevant info" in the future. Even humans can't do this, but whenever that's an issue, we just go back and re-read, re-watch, etc.

None of these modern recurrent architectures have a way to do this.


How often do you go back and rewatch earlier parts of a movie? I hardly ever do this. In the cinema, theater, or when listening to the radio it's simply impossible, and it still works.


You are mentioning media that are largely for entertainment. Sure, you might not go back to re-attend those. But if you will be tested, or are doing research, are you really looking at a large source only once?


It's so easy to come up with serious non-entertainment examples, I'm sure you don't need my help finding them.


OP is talking about information entropy. Nature isn't relevant there.


Surely laws of nature are still relevant since they (presumably) establish limits on how closely a system can be measured and which physical interactions can be simulated by computers (and how accurately).


What would even be the right hands in this case? Seems like almost any hands would be the wrong hands.


There basically aren't any. Almost any entity that would want to buy the company should by definition not be allowed to do so.

The ideal solution would be for all data to be deleted before a sale. If anyone would want to buy 23andMe without the collected data then that's perfectly fine.


My ideal would be for customers to be given the option to have their data deleted, sold with the profit split between themselves and 23andMe, or anonymised and donated to the Human Genome Project.


The tech report doesn't say on which languages it was trained. The huggingface page says 140 languages but has no list, as far as I can see. :/


HN removed the first word of the title (“How”). I tried to delete the submission, but it didn’t work. @dang, is it possible that there’s a bug here?


That's default behaviour, but you probably shouldn't make up titles for articles anyway; the site guidelines ask you not to.


Yes, HN's dehower is buggy. Fixed now. Sorry!

