It doesn’t make sense to lump Python and Julia together in this high-level/low-level split. Julia is like Python if Numba were built in - your code gets JIT-compiled to native code, so you can (for example) write for loops to process an array without the interpreter overhead you get with Python.
People have used the same infrastructure to allow you to compile Julia code (with restrictions) into GPU kernels.
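For anyone who hasn't seen Numba, here's a rough sketch of the Python side of that analogy (the function and numbers are made up for illustration, and it assumes numba and numpy are installed); in Julia a plain loop gets this treatment without needing any decorator:

  import numpy as np
  from numba import njit

  @njit                      # compile this function to native code on first call
  def running_sum(xs):
      total = 0.0
      for x in xs:           # an explicit loop, no per-iteration interpreter overhead
          total += x
      return total

  xs = np.random.rand(10_000_000)
  print(running_sum(xs))     # first call pays the compilation cost, later calls run at native speed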
Is there evidence that minimizing finger movement is ergonomically desirable? It seems like "repetitive" is a key part of RSI, so making the exact same small motion over and over again may not be optimal.
I think about piano players, who obviously need to move their hands and arms a lot to hit the keys (and with more force). Definitely takes a lot more energy than typing on a computer keyboard, but is there evidence that it's any more or less likely to cause injury?
I have the same crank theory. I have shithouse typing technique - hands fly everywhere, whichever finger is closest, wrists move a fair bit, what's a home row? I stuck rubber o-rings under all my keys so bottoming out wasn't painful. My keyboard has the heaviest-sprung switches I could find, and I'm still on the lookout for a heavier mechanism (e.g. literally a typewriter or piano-type mechanism).
I also started learning piano at 4 and played daily until 25 or so. I still play other instruments but with different movements.
I am 35 and still have no hint of RSI or carpal tunnel (touch wood). I had a scare for a bit, but it turned out my mouse was just in a dumb position.
YMMV, but the above informs my crank belief that 'move heaps, varied as much as possible, get strong fingers and forearms' is a viable approach.
N.B. A note on the bottoming-out stuff: this was again inspired by my piano teacher, who taught a technique of imagining pressing the piano keys 'through' the base, further than they move in reality. This was combined with the weight coming from your entire arm - forearm, bicep, and shoulder - not from your fingers.
N.B.B. If anyone knows of input methods that take this to extremes I'd love to know, i.e. something that involves moving your entire arm around. I've occasionally looked at jumbo-sized keyboards for those with learning and dexterity difficulties, for example.
It’s interesting how you are probably saying you are scrawny for a westerner, but if you were the opposite - an ’easterner’ - you wouldn’t be considered scrawny.
I bet there are people who would consider ’westerners’, however they define them, weak and scrawny. Pacific Islanders? Eastern Europeans?
Either way, our piano teachers must have come from the same school of thought and no RSIs here either. For fifteen years now I’ve typed on a macbook keyboard (tethered to a big display 90% of the time) which I’ve never bothered to upgrade from. No vim. A constant mix of the trackpad, Apple magic mouse (!) and Wacom tablet for input. Haven’t experienced any ergonomics issues whatsoever.
Precise causes of RSI are, as far as I know, still largely unknown, but (at least as far as current hypotheses go) there does have to be some kind of strain involved.
From a quick search, it seems piano players do suffer from a high incidence of RSI (e.g. https://pubmed.ncbi.nlm.nih.gov/12611474/, which also correlates it with smaller hands, i.e. more stretching).
I recently started learning the piano as an adult, and from what I've gathered from reading and watching videos, the 'folk wisdom' about how to avoid RSI-type injuries is to minimize tension in your wrists by maximizing relaxation of your fingers and wrists (especially the thumbs, which tend to get locked into a permanent state of mild tension on the piano keyboard). So you want to do your best to develop more finger independence, so that you can press with one finger while keeping the neighboring fingers relaxed. It's really hard.
If you can, I'd recommend getting a piano teacher and live lessons. I got one, and the improvement in posture was the biggest gain I got. That, and some cultural push (e.g. 'there's no wrong music, only music someone doesn't like', or putting the piano next to a window so you learn not to watch your own hands).
I’ve thought quite a bit about getting a teacher, but have hesitated for a few [somewhat legitimate] reasons. Among them, I’m partially disabled and can’t practice consistently. However, having a teacher berate me about my posture would actually probably help that at least a little bit, now that you mention it :-) Thanks for taking the time to share your experience and encouragement with me! Best of luck on your lessons as well.
About the time I graduated from college, I'd developed tendinitis in both wrists. It took me years of physical therapy, habitual stretching, breaks, and new exercise, along with a string of different types of keyboards, to recover.
I learned Dvorak on a Kinesis keyboard. Years later, I realized that switching to the high-quality, consistent, mechanical Kinesis was 99% of the payoff.
If I were doing it over again, I'd have just jumped to something like a QWERTY Realforce with Topre switches.
For the record, the absolute worst keyboard was an early Microsoft 'Ergonomic'. The inconsistent resistance absolutely tore my tendons up. Also for the record, the best thing to stave off injuries after healing was taking up rock climbing as a hobby.
Cool and surprising to see built-in support for the Snyderphonics Manta [1], which is a pretty niche controller. I wrote the `libmanta` library [2] that is vendored into sapf. Haven't touched the library in a few years (though I still use my Manta), so it feels good to see it pop up!
I’m very curious about your experience doing audio on the GPU. What kind of worst-case latency are you able to get? Does it tend to be pretty deterministic or do you need to keep a lot of headroom for occasional latency spikes? Is the latency substantially different between integrated vs discrete GPUs?
Short answer: it has been a big pain in the butt. The GPU hardware is mostly really great, but the drivers/APIs were not designed for such a low-latency use case. There's (for audio) a large overhead latency in kernel execution scheduling. I've had to do a lot of fun optimization in terms of just reducing the runtime of the kernel itself, and a lot of less-fun evil dark magic optimization to e.g. trick macOS into raising the GPU clock speed.
Long answer: I've written a fair bit about this on my devlog. You might check out these tags:
Thanks for the extra info, I read through some of your entries on GPU optimization and it definitely seems like it's been a journey! Thanks for blazing the trail.
I saw that phrase and thought it was pretty weird. Hunting wild animals for food is not some fringe thing that happens in "other places". I've eaten tons of fish, duck, deer, elk, etc. that were all "wild animals".
Absolutely agree, but poison doesn't exactly spread like an infection. Plus I guess most herbivorous animals by now do have a fair bit of intuition about what to eat and what not to.
I remember back in 2008-ish Johnny Lee at CMU built a cool hack that tracked the user's head using a Wiimote as an infrared camera, and used it for this kind of effect.
Turns out that head-tracking parallax is surprisingly effective even without stereo vision. I'd guess the effect works best when your head motion is large relative to the distance between your eyes, and for objects far enough away that stereo vision isn't giving you much depth information anyway.
I don't know exactly where those thresholds are, but I wouldn't be surprised if a pinball machine is in a regime where it works well.
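To put rough numbers on that intuition, here's a back-of-envelope comparison (my own assumed figures, nothing measured from the Wiimote demo): the angle subtended by a typical side-to-side head movement versus the ~6.5 cm between your eyes, for objects at a few distances.

  import math

  IPD = 0.065        # interocular distance in meters (typical adult, assumed)
  HEAD_MOVE = 0.30   # side-to-side head motion in meters (assumed)

  for d in (0.5, 1.0, 2.0, 4.0):  # object distance in meters
      stereo = math.degrees(math.atan(IPD / d))
      parallax = math.degrees(math.atan(HEAD_MOVE / d))
      print(f"{d:3.1f} m: stereo baseline ~{stereo:4.1f} deg, head-motion baseline ~{parallax:4.1f} deg")

The head-motion baseline dwarfs the stereo one at every distance here, which at least fits the intuition that tracked parallax alone can carry most of the depth impression for something pinball-machine-sized.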
Say I walk into a machine, and then I walk out, and also an exact duplicate walks out of a nearby chamber. My assumption is that we’d both feel like “me”. One of us would have the experience of walking into the machine and walking out again, and the other would have the experience of walking into the machine and being teleported into the other chamber.
I'm probably lacking in imagination, or the relevant background, but I'm having trouble thinking of an alternative.
You assume that both would feel like you, but there is no way you can prove it. The other can be a philosophical zombie [1] for all you know.
Would the "current you" feel any different after the duplication? Most people, including me, would find this counterintuitive. What happens if the other you travels to the other end of the world? What would you see? The question is not how the replica would think and act from an outside observer's perspective, but would it have the same consciousness as you. Would you call the replica "I"?
Or to make it more complex, what would happen if you save your current state to a hard disk, and an exact duplicate gets manufactured 100 years after you die, using the stored information?
Like GP, I feel that I might be lacking in imagination here, but I really don't follow what this is supposed to reveal.
>Would you call the replica "I"?
The two would start out identical and immediately start to diverge like twins. They would share memories and personality but not experience? What am I missing here?
I understand what the author means, though I struggle to express it as well. The best I can come up with is this: what defines "I"? Is it separate from "I", and if so, how? Or does "I" merely appear that way because our perspective is informed by our limited being?
It seems to me that this ascribes an existence to “I” that is separate from the brain; with no evidence for this existence, that makes it mystical/magical thinking, a.k.a. superstition.
Not really. The "vertiginous question" is just that, a question. We can't call a question superstition because we don't have a good answer for it yet.
For example, we can't call the question "why does gravity exist" superstition either. It's a valid question. We can feel the gravity, measure it, and forecast it, therefore it exists, but we still don't have a concrete answer as to what causes it. We don't assume that there is a metaphysical explanation, but we don't know the actual answer either. Similarly, the vertiginous question is a meaningful question, even though we don't have an answer.
> Doing brute force evaluation on 1024² pixels, the bytecode interpreter takes 5.8 seconds, while the JIT backend takes 182 milliseconds – a 31× speedup!
> Note that the speedup is less dramatic with smarter algorithms; brute force doesn't take advantage of interval arithmetic or tape simplification! The optimized rendering implementation in Fidget draws this image in 6 ms using the bytecode interpreter, or 4.6 ms using the JIT backend, so the improvement is only about 25%.
I love how this is focused on how the JIT backend is less important with the algorithmic optimizations, and not on how the algorithmic optimizations give a 1000x improvement with bytecode and 40x with JIT.
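For anyone checking the arithmetic, those ratios fall straight out of the numbers quoted above:

  # Times quoted in the article, in seconds
  brute_interp, brute_jit = 5.8, 0.182
  opt_interp, opt_jit = 0.006, 0.0046

  print(brute_interp / opt_interp)  # ~970x: smarter algorithm, bytecode interpreter
  print(brute_jit / opt_jit)        # ~40x:  smarter algorithm, JIT backend
  print(brute_interp / brute_jit)   # ~32x:  JIT vs interpreter on brute force
  print(opt_interp / opt_jit)       # ~1.3x: JIT vs interpreter after the algorithmic work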