Hacker News

> On my site, I have a list of DAWs and a simple Pro/Con view of each

You reviewed three DAWs. Glad you put the link to Admiral Bumblebee's page on yours, because that's actually "a list of DAWs".

Re: MIDI - you can already "play between the notes" using either MTS or even just simple pitch bend - the issue with the latter is building a controller that makes this better than the wheel on most keyboards, the issue with the former is finding synths that understand MTS and use it.
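To make the pitch-bend route concrete, here is a minimal sketch of mapping a desired offset in cents to a 14-bit MIDI pitch-bend value. It assumes the synth is set to the common default bend range of ±2 semitones; the function name and everything else here is illustrative, not from any particular library.

```python
# Sketch: map an offset in cents to a 14-bit MIDI pitch-bend value,
# assuming the common default bend range of +/-2 semitones.
def bend_for_cents(cents, bend_range_semitones=2.0):
    """Return the 14-bit pitch-bend value (0..16383, 8192 = no bend)."""
    semitones = cents / 100.0
    if abs(semitones) > bend_range_semitones:
        raise ValueError("offset exceeds the configured bend range")
    # 8192 is the center; the upper half spans +bend_range semitones.
    return 8192 + round(semitones / bend_range_semitones * 8191)

# A quarter tone (+50 cents) above the sounding note:
print(bend_for_cents(50))  # 10240
```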

> no good method for hardware acceleration is a limiting factor in the audio world

DSP processors have been available for this since before ProTools, which was fundamentally based on hardware acceleration. In the present era, UAD and others continue (for now) to carry that torch, but the principal problem is that during the period when Moore's Law applied to processor speed, dedicated DSP could not keep up with generic CPUs (a DSP system was only ever 1 or 2 generations ahead of generic CPUs in speed). Current generic processors are now so fast that for time domain processing, there's really no need for DSP hardware - you just need a bigger multicore processor if you're running into limits (mostly - there are some exceptions, but DSP hardware wouldn't fix most of them either).



> You reviewed three DAWs.

There's definitely more than 3 reviewed there? Sure, a lot of them still say [TODO] on getting a review, but then I don't want to review something I don't have a significant amount of 1-on-1 time with. Maybe you didn't realize there are clickable tabs?

Also, the way you phrased this came off as quite rude. There is someone on the other side of the screen, and you'd do well to consider that in the future.

> Re: MIDI - you can already "play between the notes" using either MTS or even just simple pitch bend [...]

Pitchbend is global to the track: if you play a chord and bend, all of the notes bend equally. With MPE it is possible to bend a single note, but then MPE isn't supported in every DAW (FL Studio doesn't have it), and my gripes with MTS are explained already: you're still working with only 128 possible notes, so you limit your octave range. Worse, with all of the microtonal solutions, the UI will still typically look like a 12-tone piano. This is sort of okay for 24TET, but it's immensely confusing for anything else.
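The reason MPE can bend one note out of a chord is that each sounding note gets its own MIDI channel, so a bend message only reaches that note. A minimal sketch of that channel rotation, assuming the common MPE layout of a master channel plus 15 member channels (the class and method names are mine, for illustration):

```python
# Sketch of MPE channel rotation: each note is assigned its own MIDI
# channel, so a pitch-bend message touches only that note.
class MPEAllocator:
    def __init__(self, member_channels=range(1, 16)):  # ch 0 is the master
        self.free = list(member_channels)
        self.active = {}  # note number -> channel

    def note_on(self, note):
        ch = self.free.pop(0)
        self.active[note] = ch
        return ch

    def bend(self, note, value):
        # Only the channel carrying this note receives the bend.
        return ("pitch_bend", self.active[note], value)

    def note_off(self, note):
        ch = self.active.pop(note)
        self.free.append(ch)
        return ch

alloc = MPEAllocator()
for n in (60, 64, 67):       # a C major chord, one channel per note
    alloc.note_on(n)
print(alloc.bend(64, 9000))  # bends only the E, not the whole chord
```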

> DSP processors [...]

That really hasn't been true for audio for a while now. Moore's law isn't dead, but by definition "Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years" - that doesn't equate to all workloads seeing an improvement. Audio is a pretty latency-sensitive, mostly strictly sequential workload; making audio code that uses multiple cores often isn't even possible. Audio needs clock speed and IPC gains, which we have gotten, but that's not always enough. I have a 5900X and still hit limits. What would you recommend I do, get a Threadripper or Xeon so that I can have even more cores sit idle when making music? If anything, the extra cores have been a hindrance lately - on the 3900X I had before, at high loads I had to pin the processes to one chiplet or I'd be more likely to get buffer underruns. It's not as if anyone argues that CPUs getting faster so quickly means we don't need graphics cards.
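The "strictly sequential" point can be shown with any stateful effect. In this toy one-pole low-pass (illustrative, not from any DAW), each output sample depends on the previous one, so the work inside a single effects chain cannot be split across cores the way independent pixels can:

```python
# Sketch: a one-pole low-pass filter. y[n] depends on y[n-1], so the
# samples of one track/effect chain must be computed in order.
def one_pole_lowpass(samples, a=0.1):
    out, y = [], 0.0
    for x in samples:
        y = y + a * (x - y)   # strictly sequential data dependency
        out.append(y)
    return out

step = one_pole_lowpass([1.0, 1.0, 1.0, 1.0])
print([round(v, 3) for v in step])  # [0.1, 0.19, 0.271, 0.344]
```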

UAD exists, but then you're limited to their plugins, and their accelerators are quite expensive for not being all that powerful. I'm also not convinced that kind of accelerator is even the right approach. Field Programmable Analog Arrays (FPAAs), for example, could be set up with a DAC and ADC on either end. Or we could make DAWs/OSs capable of handling digitally connected analog "plugins" better - think effects like the Big Muff Pi Hardware Plugin [1] or Digitakt with Overbridge [2] (these are the only two examples I know of!). Using the word "acceleration" was wrong; what I really meant is offloading. We need a way to offload the grunt work to something we can easily add more of, or that better fits the task. I think this is particularly true of distortions, as slamming the CPU with 4 or 8x oversampling to get a distortion to not sound awful hurts.

[1] https://www.ehx.com/products/big-muff-pi-hardware-plugin/ [2] https://www.elektron.se/us/overbridge
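To put a number on the oversampling cost: a 4x oversampled waveshaper runs the nonlinearity (plus interpolation and decimation filters, omitted here) on four times as many samples. A deliberately crude sketch - zero-order-hold upsampling and averaging for decimation, which no real plugin would use, just to count the work:

```python
# Sketch of why oversampled distortion is expensive: the nonlinearity
# runs `factor` times per input sample. Real code would also pay for
# proper interpolation and decimation filters on top of this.
import math

def distort_oversampled(samples, factor=4):
    work = 0
    out = []
    for x in samples:
        acc = 0.0
        for _ in range(factor):          # factor samples per input sample
            acc += math.tanh(3.0 * x)    # the waveshaper itself
            work += 1
        out.append(acc / factor)         # crude decimation: average
    return out, work

_, ops = distort_oversampled([0.0] * 512)
print(ops)  # 2048: 4x the per-sample work of the non-oversampled path
```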


Sorry, my mistake. Four DAWs: Live, Bitwig, FL Studio, Reaper. We'll have to agree to disagree on whether VCV (which I use very regularly) or trackers count as DAWs.

I agree with you about pitchbend, but you're narrowing what "play between the notes" means: you seem to mean "polyphonic note expression", which is a feature that quite a few physical instruments (not just the piano) lack.

MPE doesn't need to be supported by the DAW, only by the synthesizer. It's just regular MIDI 1.0, with different semantics. It's more awkward to edit MPE in a DAW that doesn't support it, but not impossible. Recording and playback of MPE requires nothing of the DAW at all.

> the UI will still typically look like a 12-tone piano

We just revised the track header piano roll in Ardour 8 as step one of a likely 3-4 step process of supporting non-12TET. Specifically, at the next step, it will not (necessarily) look like a 12-tone piano.

> Audio is a pretty latency sensitive, mostly strictly sequential workload

It's sequential per voice/track, not typically sequential across an entire composition.
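The parallelism that does exist across tracks can be sketched like this: independent tracks render concurrently, then the results are summed at the mix bus. The track contents and the trivial "render" stage are placeholders:

```python
# Sketch: per-track DSP stays sequential, but independent tracks can
# render on separate cores before the mix bus sums them.
from concurrent.futures import ThreadPoolExecutor

def render_track(track):
    return [s * 0.5 for s in track]       # placeholder gain stage

tracks = [[1.0, 2.0], [4.0, 6.0], [0.0, 2.0]]
with ThreadPoolExecutor() as pool:        # tracks run in parallel
    rendered = list(pool.map(render_track, tracks))

mix = [sum(vals) for vals in zip(*rendered)]  # mix bus: sample-wise sum
print(mix)  # [2.5, 5.0]
```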

IPC gains are not required unless you insist on process-level separation, which has its own costs (and gains, though mostly as a band-aid over crappy code).

If you're already doing so much processing in a single track that one of your 5900X cores can't keep up, then I sympathize, but you're in a small minority at this point.

Faster CPUs don't help graphics when the graphics layers have been written for years to use non-CPU hardware. Also, as you sort of implicitly note, there's a more inherent parallelism and also decomposability of graphics operations to GPU-style primitives than there is for audio (at least, we haven't found it yet).

Offloading to external DSP hardware keeps popping up in various forms every year (or two). In cases where the device is connected directly to your audio interface (e.g. via ADAT or S/PDIF), using such things in a DAW designed for it is really pretty easy (in Ardour you just add an Insert processor and connect the I/O of the Insert to the appropriate channels of your interface). However, things like the Big Muff make the terrible mistake of being just another USB audio device, and since these devices can't share a sample clock, you need software to do clock correction (essentially, resampling). You can do that already with technology I've been involved with, but there's not much to recommend about it. Overbridge doesn't have precisely the same problem in all cases, but it can have it.
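The clock-correction problem can be sketched as a variable-ratio resampler: two devices with unsynchronized clocks drift, so one stream must be resampled by a tiny ratio to stay aligned. Linear interpolation here for brevity; production clock correction uses higher-order interpolation and continuously re-estimates the ratio:

```python
# Sketch: resample a stream by `ratio` to correct clock drift between
# two devices that can't share a sample clock. Linear interpolation
# only; real implementations use better interpolators.
def resample(samples, ratio):
    """Resample by `ratio` (e.g. 1.0001 if the source clock runs fast)."""
    out, pos = [], 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

# A 100 ppm clock mismatch: over 10000 input samples the two streams
# disagree by about one sample, which would click or drop out if left
# uncorrected.
out = resample([0.0] * 10000, 1.0001)
print(len(out))  # 9999: the corrected stream is one sample shorter
```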



