
> G++ picks space over time

By definition, that's zero-overhead because Ultrassembler doesn't care about space.


Okay, then a traditional setjmp/longjmp implementation is zero-overhead because I don't care about space or time!


I thought about hashing, but found that hashing would be enormously slow to compute compared to a perfectly crafted tree.


But did you think about using a perfect hash function and table? Based on my prior research, it seems like they are almost universally faster on small strings than trees and tries due to lower cache miss rates.


Ditto. Perfect hashing strings smaller than 8 bytes has been the fastest lookup method in my experience.


Problem is, there are a lot of RISC-V instructions way longer than that (like th.vslide1down.vx), so hashing is going to be slow.


You could copy the instruction to a 16-byte buffer and hash the one/two int64s. Looking at the code sample in the article, there wasn't a single instruction longer than 5 characters, and I suspect that in general instructions with short names are more common than those with long names.

This last fact might actually support the current model, as it grows linearly-ish in the size of the instruction, instead of being constant like a hash.
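For illustration, a minimal sketch of that 16-byte-buffer idea (the function name and mixing constants are mine, not from the article; a real perfect hash would pick constants so every known mnemonic lands in a distinct table slot):

    #include <cstdint>
    #include <cstring>
    #include <string_view>

    // Copy the mnemonic into a zero-padded 16-byte buffer and hash it
    // as two 64-bit words. Mnemonics over 16 bytes would need a fallback.
    uint64_t hash_mnemonic(std::string_view name) {
        unsigned char buf[16] = {};
        std::memcpy(buf, name.data(), name.size() < 16 ? name.size() : 16);
        uint64_t lo, hi;
        std::memcpy(&lo, buf, 8);
        std::memcpy(&hi, buf + 8, 8);
        // Illustrative multiply-xor mixer, not a tuned perfect hash.
        uint64_t h = lo * 0x9E3779B97F4A7C15ull;
        h ^= hi * 0xC2B2AE3D27D4EB4Full;
        return h ^ (h >> 32);
    }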


Note th.vslide1down.vx is a T-Head instruction, a vendor custom extension.

It is not part of RISC-V, nor supported by any CPUs outside of that vendor's own.


Is there a handy list of all RISC-V instructions?


Hi everyone, I'm the author of this article.

Feel free to ask me any questions to break the radio silence!


Nice work and good writeup. I think most of that is very sound practice.

The codegen switch with the offsets is in everything; the first time I saw it was in the Rhino JS bytecode compiler in maybe 2006, and I've written it a dozen times since. Still, it's clever that you worked it out from first principles.

There are some modern C++ libraries that do frightening things with SIMD that might give your bytestring stuff a lift on modern stupid-wide, high-mispredict-penalty hardware. Anything by Lemire, or StringZilla; take a look at zpp_bits for inspiration about theoretical-minimum data structure pack/unpack.

But I think you got damn close to what can be done, niiicccee work.


FWIW, this is basically an implementation of perfect hashing, and there's a myriad of different strategies. Sometimes “switch on length + well-chosen characters” is good; sometimes you can do better (e.g. just looking up in a table instead of a long if chain).
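As a toy sketch of the “switch on length + well-chosen characters” flavor (mnemonics and names picked arbitrarily, not the article's actual dispatch):

    #include <string_view>

    enum class Op { Add, Addi, Lui, Jal, Jalr, Unknown };

    // Toy dispatch: length plus one distinguishing character narrows the
    // candidates to a single mnemonic; one full compare then confirms it.
    Op classify(std::string_view m) {
        switch (m.size()) {
            case 3:
                switch (m[0]) {
                    case 'a': return m == "add" ? Op::Add : Op::Unknown;
                    case 'l': return m == "lui" ? Op::Lui : Op::Unknown;
                    case 'j': return m == "jal" ? Op::Jal : Op::Unknown;
                }
                return Op::Unknown;
            case 4:
                switch (m[3]) {
                    case 'i': return m == "addi" ? Op::Addi : Op::Unknown;
                    case 'r': return m == "jalr" ? Op::Jalr : Op::Unknown;
                }
                return Op::Unknown;
            default:
                return Op::Unknown;
        }
    }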

The “value speculation” thing looks completely weird to me, especially with the “volatile” that doesn't do anything at all (in a pointer declaration, where you put volatile determines whether the pointer or the pointee is qualified). If it works, I'm not really convinced it works for the reason the author thinks it works (especially since it refers to an article talking about a CPU from the relative stone age).


Overall, this is a fantastic dive into some of RISC-V's architecture and how to use it. But I do have some comments:

> However, in Chata's case, it needs to access a RISC-V assembler from within its C++ code. The alternative is to use some ugly C function like system() to run external software as if it were a human or script running a command in a terminal.

Have you tried LLVM's C++ API [0]?

To be fair, I do think there's merit in writing your own assembler with your own API. But you don't necessarily have to.

I'm not likely to go back to assembly unless my employer needs that extra level of optimization. But if/when I do, and the target platform is RISC-V, then I'll definitely consider Ultrassembler.

> It's not clear when exactly exceptions are slow. I had to do some research here.

There are plenty of cppcon presentations [1] about exceptions, performance, caveats, blah blah. There are also other C++ conferences with similar presentations (or even almost identical ones, because the presenters go to multiple conferences), though I don't have a link handy because I pretty much only attend cppcon.

[0]: https://stackoverflow.com/questions/10675661/what-exactly-is...

[1]: https://www.youtube.com/results?search_query=cppcon+exceptio...


> LLVM's C++ API

I think I read something about this but couldn't figure out how to use it because the documentation is horrible. So I found it easier to implement my own, and as it turns out, there are a few HORRIBLE bugs in the LLVM assembler (found via cross-reference testing), probably because nobody is using the C++ API.

> There are plenty of cppcon presentations [1] about exceptions, performance, caveats, blah blah.

I don't have enough time to watch these kinds of presentations.


A specific presentation I'd point to is Khalil Estell's presentation on reducing exception code size on embedded platforms at https://www.youtube.com/watch?v=bY2FlayomlE

But honestly you'd get the vast majority of the benefit just by skimming through the slides at https://github.com/CppCon/CppCon2024/blob/main/Presentations...

With a couple of symbols you define yourself, a lot of the associated g++ code size is sharply reduced while still allowing exceptions to work. (Slide 60 on.)
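I haven't verified this is exactly the symbol set from the slides, but the classic example of this trick on g++ toolchains is replacing the verbose terminate handler, whose default definition drags in the demangler and stdio machinery:

    // Replacing g++'s default verbose terminate handler keeps the
    // demangler and printf machinery out of the binary, while exceptions
    // themselves keep working.
    namespace __gnu_cxx {
        void __verbose_terminate_handler() {
            while (true) {}  // halt; real firmware might reset instead
        }
    }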


> I think I read something about this but couldn't figure out how to use it because the documentation is horrible.

Fair enough.

> So, I found it easier to implement my own, and as it turns out, there are a few HORRIBLE bugs in the LLVM assembler (from cross reference testing)

Interesting claim, do you have any examples?


> I don't have enough time to watch these kinds of presentations.

Then let me pick and share some of my favorites that I found enlightening, and summarize with some information that I found useful.

By far, the most useful one is Khalil Estell's presentation last year [0]. It's fairly fast-paced but a relatively deep dive into exception mechanics. At the end, he advocates for a new tool that would audit a program to determine what exceptions could be thrown. I think that's a flipping fantastic idea for a tool. Unfortunately I haven't seen any progress toward it -- if someone here knows where his tool is, or a similar tool, please reply! I did send him an email a few months ago inquiring about it, but haven't received a reply. Nonetheless, the whole presentation was excellent in my opinion. I did see that he had another related presentation at ACCU this year [4] on "C++ Exceptions are Code Compression" (which I can totally believe -- I've seen it myself in binary sizes), but I haven't watched it yet. I'll watch it later today.

Just about anything from Herb Sutter is good. I don't like that he works for Microsoft, but he does great stuff for C++, including the old Guru of the Week series [1]. In particular, his 2019 presentation [2] describes different error handling techniques, some difficulties and pitfalls in combining libraries with different error handling techniques, and leads up to explaining why std::expected came about. He does pontificate a lot though, so the presentation is fairly high level and slow paced.

Dave Watson's 2017 presentation [3] dives into a few different implementations of stack unwinding. It's good to understand how different compilers implement exceptions with low- or zero-cost overhead and what that "overhead" is really measuring.

So, there's about a half of a day of presentations to watch here. I hope that's not too much for you.

[0]: https://www.youtube.com/watch?v=bY2FlayomlE

[1]: https://herbsutter.com/gotw/

[2]: https://www.youtube.com/watch?v=ARYP83yNAWk

[3]: https://www.youtube.com/watch?v=_Ivd3qzgT7U

[4]: https://www.youtube.com/watch?v=LorcxyJ9zr4


Update: it looks like link [4] is just a rehash of his talk from last year's cppcon [0].

[0]: https://www.youtube.com/watch?v=bY2FlayomlE

[4]: https://www.youtube.com/watch?v=LorcxyJ9zr4


Isn't your MemoryBank already somewhere in std::pmr?

If I'm honest, I've never looked into pmr, but I always thought that's where std has arena allocators and stuff

https://en.cppreference.com/w/cpp/header/memory_resource.htm...
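For what it's worth, `std::pmr::monotonic_buffer_resource` is the closest thing there: essentially an arena that bump-allocates and releases everything at once. A minimal sketch:

    #include <memory_resource>
    #include <vector>

    int main() {
        char arena[4096];
        // Bump-allocates out of `arena`, falling back to the default
        // resource if it runs out; everything is freed at once when the
        // resource is destroyed.
        std::pmr::monotonic_buffer_resource pool(arena, sizeof(arena));
        std::pmr::vector<int> v(&pool);
        for (int i = 0; i < 100; ++i) v.push_back(i);
    }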


What's the difference between a Programming Furu and a Programming Guru? Is there a joke I'm missing?


Furus are "fake gurus." It comes from the Fintwit space where "furus" share their +1000% option trades as if they're geniuses in order to get you to sign up for their expensive Substack.


You might look into using memory-mapped IO for reading input and writing your output files. This can save some memory allocations and file read and write times. I did this in a project where I got more than a 10x speed-up. In many cases, file IO is going to be your bottleneck.
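A minimal POSIX sketch of the input side, assuming the whole file fits comfortably in the address space (error handling abbreviated):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <string_view>

    // Map a whole file read-only and return its contents as a view.
    // The mapping is deliberately leaked for brevity; real code would
    // munmap() it when done.
    std::string_view map_file(const char* path) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return {};
        struct stat st;
        if (fstat(fd, &st) != 0 || st.st_size == 0) { close(fd); return {}; }
        void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);  // the mapping stays valid after the fd is closed
        if (p == MAP_FAILED) return {};
        return {static_cast<const char*>(p),
                static_cast<std::size_t>(st.st_size)};
    }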


mmap-based I/O still needs to go through the kernel, including memory allocation (in the page cache) and all. If you've got 10x speedup from mmap, it is usually because your explicit I/O was very inefficient; there are situations where mmap is useful, but it's rarely a high-performance strategy, as it's really hard for it to guess what your intended I/O patterns are just from the page faults it's seeing.


Windows uses memory-mapped IO for loading all executable processes because it allows you to start executing a process after loading a few pages, even if the exe is megabytes. You can use the same trick to reduce latency by starting to assemble data before the rest of the file loads; the rest can be loaded using more efficient asynchronous mechanisms. Using it for output also means your process doesn't wait on flushes, since those are also async. And in memory-constrained environments the OS doesn't have to write your data to swap; it can just reload it from the memory-mapped file.


Linux also uses mmap for running executables. But explicit I/O does not mean you have to start off with a gigabyte-long read().


More detailed explanation from ChatGPT. As a quick estimate, you could achieve a >2x speed-up using memory-mapped files for a typical assembler workload.

https://chatgpt.com/share/68b5e0db-a6d0-8005-9101-d326d2af0a...


Why would anyone be interested in arguing against a confused AI?


I was trying to provide a more detailed explanation without typing a lot. I studied this problem a lot as a PE at VMware.


https://distantprovince.by/posts/its-rude-to-show-ai-output-...

In any case, if you really believe mmap is great for an assembler, then sure, go ahead. But it's not.


I implemented an assembler as part of VMware ThinApp and this was a big performance boost for me, but maybe you have a different experience from your efforts?


Yes. (I'm not going into a pissing contest.)



I don't remember this being the case; you could reuse your old MC purchase when they made the transition over.


The Mojang to minecraft.net transition, yeah. The minecraft.net to Microsoft transition, no. https://youtu.be/rUFDRAEducI


The only thing I don't like about this is the focus on x86 assembly, which is a sinking ship because RISC-V is coming to eat its lunch, FAST.


I could understand if you wrote ARM, because that's an architecture with actual marketshare (arguably more than x86-64 at this point), but you had to choose RISC-V for the lols.


Where are the high performance RISC-V implementations? Those that compete with AMD Zen-5 and Apple M4? Or at least AWS Graviton 4?


The Tenstorrent folks are working on that.


HackerNews does not reflect the real world well


The unwritten rule of HN:

You do not criticise The Rusted Holy Grail and the Riscy Silver Bullet.


How would you define "fast"?


In relative terms, compared with similarly priced and powered devices on the market. RISC-V does lag behind the others - ARM, x86/64 - here, at least for now.


Not eating. Only drinking water or zero-calorie drinks such as black coffee.

Only while fasting can a person think clearly. When thinking clearly, RISC-V is inevitably chosen as the ISA.

Fasting will also eventually make you hungry. Thus "RISC-V is coming to eat its lunch, FAST."


Doesn't RISC-V use vector stream processing instead of SIMD? That's a poor fit for ffmpeg.


I should say, I think it would be. I haven't actually tried it, and I know ARM has added it too, so it'd be interesting to see for sure.


Wake me up when a RISC-V processor is on par with an N50.


This is basically irrelevant now that better ISAs like RISC-V have a fixed instruction length (2 or 4 bytes) so the fancy algorithm here isn't necessary.


That fancy algorithm is relevant to RISC-V (and in fact, most fixed-length ISAs) because loading an immediate into a register needs one or two instructions depending on the immediate; you surely want to elide a redundant LUI instruction if you can. Of course such redundant instructions don't hurt by themselves, but that equally applies to x86, as the algorithm is an optimization.
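Concretely: a 32-bit constant splits into a LUI carrying the upper 20 bits and an ADDI carrying the sign-extended low 12, and the LUI is redundant exactly when the value fits in 12 signed bits. A sketch of the split (the helper name is mine, not Ultrassembler's):

    #include <cstdint>
    #include <cstdio>

    // Split a 32-bit immediate into the LUI/ADDI pair RISC-V needs.
    // ADDI sign-extends its 12-bit operand, so the upper part must be
    // compensated whenever the low part comes out negative.
    void split_imm(int32_t imm) {
        int32_t lo = imm & 0xFFF;
        if (lo >= 0x800) lo -= 0x1000;  // what ADDI will actually add
        uint32_t hi = (uint32_t(imm) - uint32_t(lo)) >> 12;
        if (hi == 0)
            std::printf("addi rd, zero, %d\n", lo);   // LUI elided
        else if (lo == 0)
            std::printf("lui rd, 0x%x\n", hi);        // ADDI elided
        else
            std::printf("lui rd, 0x%x\naddi rd, rd, %d\n", hi, lo);
    }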


As a result of RISC-V existing, all x86 processors have ceased to exist or be produced.


Accurate, if said sometime in the future rather than today.


There are still people making z80 machines today, so no.


This same problem applies to RISC-V with the C extension, because the J and JAL instructions have a larger range than the C.J and C.JAL instructions.
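For scale, C.J encodes a 12-bit signed offset (a reach of ±2 KiB) while JAL encodes 21 bits (±1 MiB), so an assembler has to relax a compressed jump to the full form once the target drifts out of range. A sketch of the range checks (function names are illustrative):

    #include <cstdint>

    // C.J reaches +/-2 KiB (12-bit signed offset); JAL reaches +/-1 MiB
    // (21-bit). Offsets are even because instructions are 2-byte aligned.
    bool fits_cj(int64_t offset) {
        return offset >= -2048 && offset <= 2046 && (offset & 1) == 0;
    }

    bool fits_jal(int64_t offset) {
        return offset >= -(1 << 20) && offset <= (1 << 20) - 2
            && (offset & 1) == 0;
    }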


Having fixed instruction length doesn't make the need to load large constants magically disappear. These just get split between multiple instructions. If anything, RISC-V might be worse. See also https://maskray.me/blog/2021-03-14-the-dark-side-of-riscv-li....


ARM would have been a better example because the amount of people that care about RISC-V is a rounding error compared to x86 or ARM.


> Qobuz removed a range of releases a couple months ago at short notice, including from users' accounts.

What was the deal with this?


It wasn't explained officially. I assume some distribution arrangement changed but the artists/releases were so varied and from various labels that I can't determine the relationship. I'd bought a handful of releases there and all of them were affected.


Losing muscle


Bodybuilders were the first people to start intermittent fasting after the mouse study. The title of the front page of https://leangains.com/ is "Leangains - Birthplace of Intermittent Fasting"

April 14, 2010: https://leangains.com/the-leangains-guide/


The Milk-V Pioneer has 64 out of order cores and supports 128GB of ECC memory!


Its Sophon SG2042 SoC has about the same per-core performance as an A72, like in an RPi 4 or Graviton 1 from 2018...


I don't know why people expect RISC-V to already be on the level ARM and x64 are. The fact RISC-V even exists to begin with is amazing.

My opinion is definitely biased, though. Only time will tell


The fact that large corporations like Google and Facebook have incentives to have a better alternative to x86 and ARM for the data center is very beneficial too, and can only speed development up.


Missing RISC-V


That's the next arch I want to add, but it takes a bit of work; sooner or later I will add it though :')


I'm missing s390x.


Yeah, it seems odd that it has PowerPC but not RISC-V.

