
Hello HN - thanks for giving our website a proper stress-test!

I'm the developer of SOUL - happy to answer any questions you guys have.. :)



Hi. Quick fly-by suggestion for you: look into the Language Server Protocol by Microsoft if you haven't already.

In the golang community, for example, Google has recently decided to build and officially support a language server (called 'gopls'). How it works: you build a language server once and get features like typecheck errors, auto-complete, go-to-definition, code peeking, documentation on hover, etc. in all editors/IDEs that support the protocol.

LSP website: https://microsoft.github.io/language-server-protocol/

Recent golang talk: https://youtu.be/bNFl7HcyDao?t=354
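To make the "build it once, every editor benefits" point concrete: LSP messages are JSON-RPC 2.0 payloads framed with a Content-Length header, so any editor can talk to any server. A minimal sketch (the method and params follow the published spec; the file URI and framing helper here are just illustrative):

```python
import json

def frame(payload: dict) -> bytes:
    """Frame a JSON-RPC message with the LSP Content-Length header."""
    body = json.dumps(payload).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

# A hover request, as an editor would send it to a language server.
hover_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/hover",
    "params": {
        "textDocument": {"uri": "file:///src/main.soul"},
        "position": {"line": 10, "character": 4},
    },
}

message = frame(hover_request)
```

The server replies with the same framing, which is why one implementation of hover/completion/diagnostics works everywhere the protocol is supported.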


Yeah, this is on our to-do-list!


Cool :)


Hi Jules, I posted this here; I'm really excited about your project moving forward!

Is there going to be a marketplace for professional SOUL-based VST plugins?

A way to browse all open-source SOUL scripts and share modules, so that the community can build on top of previous scripts (without copy-pasting all the time)?

Will all SOUL scripts be runnable in the browser, like on Shadertoy.com? If not, what features will differ between web / desktop / hardware? How are you going to handle the diversity of situations where one could use SOUL?

Thanks (-:

Side note: http://soul.dev/ doesn't have HTTPS?


Our long-term goal with plugins is to make them much easier to write in SOUL without needing to actually compile anything natively. As for a marketplace to deliver them, we'd certainly love to be able to offer one of those, and we'll see how things go.

Yep, we'd love to make soul.dev into a place where you can browse snippets of code, search for people's implementations of various DSP algorithms and try them out and put them together - that's all definitely in the plan!

And as far as cross-platform-ness goes, I don't currently foresee a situation where any SOUL code wouldn't run on all platforms.. Running it as WASM in a browser may not be particularly quick compared to the same code running via LLVM JIT or on a DSP, but it should still work.

(And yes... soul.dev certainly uses HTTPS.. maybe whoever posted this story typed it without that)


Thanks for the reply. A suggestion for the name of the marketplace or community: "Soulfly" sounds good (and it probably needs to be kept separate from the "infrastructure" / dev project?)


Well, we also have this URL: https://audio.dev :)


so this is gonna seem kinda weird, but I noticed y'all run the audio developers conference.

I own the domain adc.io, for a project that went nowhere years ago, so now I mostly use it for some personal things. would you guys be interested in doing a trade or want to buy it or something? I feel kinda trashy asking that here since it doesn't add to the discussion, but I like what y'all are doing, and didn't even know that conference existed until I clicked that link.


thanks for the offer but I think we're a bit overloaded with URLs right now!


> Our long-term goal with plugins is to make them much easier to write in SOUL without needing to actually compile anything natively

could you offer to compile/package SOUL code into a VST as a service?


Yes, we already have a prototype C++ generator as a command-line tool, so combining that with some boilerplate to build a plugin is something we'll do soon.

And yes, the same thing would also work nicely if hosted as a web service.


Targeting LV2 as well would be awesome, if possible!


Please consider making dynamic graphs a key feature for the 1.0 release! I like how easy it is to use SOUL to build simple processes, but I'd like it a lot more if I didn't have to specify my graph at compile time.

A few more questions:

- How will the language/API/reference VM be licensed?

- Will the API depend in any way on the JUCE ecosystem?

- Do you expect users to bundle the VM with their plugins/applications or to have one installation on a target machine? Or will plugins need to be supported by a host that embeds the VM?


Not sure that being able to change the graph dynamically is a particularly good way to work with this kind of thing. In almost all synths, and even DAW engines, they only modify the graph when you modify the project, not while things are running, and that's when we expect you'd do a SOUL recompile. There's a much longer discussion to be had over this which I won't dive into now, but we have thought about many possible use-cases and how they'd be done.

Licensing: very permissive for developers; probably commercial deals for companies who want to ship SOUL-compliant hardware or drivers.

JUCE dependency: no, we want to make this as vanilla as possible, to encourage its use in many ecosystems. There'll be JUCE integration, but also stand-alone C++ and a flat C API so it can be wrapped in other languages like Python, C#, Java, etc.

We'll offer an embeddable JIT VM, but our end goal is for there to be drivers in the OS or external hardware which do the work, and the API would just send them the code (like OpenGL shaders, for example).


It would be very helpful to have some way to express graph changes due to events in the language, even if it doesn't happen in real time. Otherwise we're going to have to get really hacky with generating SOUL on the fly and recompiling it, rather than just explicitly stating that graph changes happen at the discretion of the runtime.

RE the licensing, I'm asking more about the language itself, the IR, and whether it is going to be possible to develop independent implementations.

Lastly because I always forget to ask about the IR - why not WASM?


I think we'll probably want hot-reloading of sub-graphs in a larger graph, but that'll be a project for 1-2 years down the line when we're more involved in building DAW engines using it.

Re: building a 3rd party back-end, I guess that might be technically possible but we'd rather keep control of that side of things. We've not fully decided our approach there yet.

Why not WASM? Well.. quite a few complicated reasons, involving things like the ability to generate continuations, to get good auto-vectorisation, and to be portable to weird DSP architectures. Also, the system can't just be a straight program; it had to be declarative, at least at a high level. And secure, so LLVM IR also wasn't quite the right shape for it. We sweated over this decision, believe me!


You mention custom DSP hardware licensing ... any reason not to target the GPU directly using CUDA or whatever?


I've looked a tiny bit into this sort of thing (using Metal compute shaders), and the problem is that, even with all of Metal's optimizations around CPU/GPU synchronization, the overhead seems to be too high for real-time audio DSP in practice.

The thing I tried experimentally implementing is basically a 128x128 matrix mixer with a little bit of extra processing. On a three-year-old MacBook Pro, the GPU barely had to lift a finger to crunch the data, but the round-trip latency was still high enough that it would struggle to keep up with anything less than a buffer size of 512 or so at 48kHz (which is on the high end for live mic processing). It would be fantastic for offline processing with larger buffers, though.

I haven't tried CUDA or OpenCL, so I don't know if the situation is the same there—but of course they have the problem of vendor support as well.
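For a sense of scale on those numbers: a buffer of N samples at sample rate fs represents N/fs seconds of audio, so a 512-sample buffer at 48 kHz is already about 10.7 ms before any GPU round-trip overhead is added on top. A quick sanity check:

```python
def buffer_latency_ms(buffer_size: int, sample_rate: int) -> float:
    """Time span one audio buffer represents, in milliseconds."""
    return buffer_size / sample_rate * 1000.0

# Common buffer sizes at 48 kHz: small buffers leave very little
# headroom for any CPU/GPU synchronization round trip.
for n in (64, 128, 256, 512):
    print(n, round(buffer_latency_ms(n, 48_000), 2))
```

This is why a pipeline that only keeps up at 512-sample buffers is fine for offline rendering but marginal for live mic processing, where much smaller buffers are typical.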


Our original thought on that was "nah, that'd be silly, because GPUs are the wrong shape for audio workloads".

..however, a few conversations with more knowledgeable people have changed our minds, and we're definitely going to give it a try on Metal and Vulkan to see what happens. Should be interesting.


I can understand the aversion to GPUs (with "Graphics" being the primary point). Most ways of interfacing with the GPU use x, y, z, colour, and lighting wrappers around what is essentially a super-powerful VPU. I also wonder how the traditional pipeline (e.g. vertex shading, fragment shading) can be repurposed for audio.

I don't have any experience with Metal or Vulkan, but my intuition is that audio DSP is going to include a healthy dose of linear algebra and multivariate calculus. That points towards some kind of fit with the GPU. Given that basic GPUs are available on practically every device (including phones), it seems like a fantastic fit. Even audio hardware developers would benefit, since it would open up access to commodity-priced chips (rather than custom ASICs/FPGAs/whatever).


Yep.

And even for very simple audio loads, there's often unused capacity in compute cores (even when running games) which could do the job "for free" without bothering the CPU.


I'm curious what you mean by "wrong shape"? Because GPUs are for very data-parallel workloads? Or because the latency is high?


Yeah, we made both of those assumptions. But it's apparently less true of the latest generation of compute engines. We probably won't bother with CUDA, but Metal is certainly a viable platform to try. It's one of those things where nobody really knows how it'll do until you try it.

Also, we do know a few people who want to use SOUL to write audio code which does need high parallelism. It's not a super-common use-case for audio, but it does exist.


I hope you'll consider writing up your results in a blog post or something! As I just mentioned elsewhere in this thread, I have experimentally found latency with Metal compute to be too high to be feasible, but I would dearly love to be proven wrong, if there's a trick I'm missing.


Yes, we'll certainly be writing it up. Similarly, I'd be interested in reading about your experiences attempting the same, if that's available.


Hi. Great talk. It basically introduced me to audio development! Given your experience, and the future you describe where SOUL or a similar language is how audio development is done... what resources and approaches would you recommend that someone completely new to the domain study?


Audio dev has always been insanely hard - that's part of the motivation behind creating this!

It's hard because you need some serious mathematical skills to understand the DSP itself (that's the bit I'm lacking in..).

Then if you're building a "real" product, you're going to have to write in C++ and have a rock-solid understanding of concurrency, real-time coding and many other very tricky subjects which take huge amounts of experience to get good at.

I've been doing this for over 20 years so have lost sight of how beginners should learn it.. But most people seem to just dive in with an idea they want to build, and start trying to swim! It can be painful to see people using JUCE/C++ who don't really have any interest in C++ for its own sake, but who are struggling to get anywhere without putting the effort in to learn the language properly.


I wonder if the SOUL team has considered looking at Julia, which aims to solve the two-language problem, in which programming languages are either fast or easy. It's in the context of scientific computing, which is usually almost entirely about throughput and not about latency. But I think there's a lot in common.


At a high level it might seem like that, but there's a big difference between "fast" and "realtime". We need to go fast, but the main problem the language needs to solve for us is to break the work down into a steady, realtime stream. Overall throughput is Julia's strong point, but that's not the same thing at all.
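The "fast vs. realtime" distinction can be put in numbers: a realtime callback has a hard deadline every block, so average speed is not what matters. A toy sketch (the per-block times are made up for illustration):

```python
# Hypothetical per-block processing times (ms) for a 256-sample
# block at 48 kHz; the realtime deadline is 256/48000 s ≈ 5.33 ms.
block_times_ms = [1.2, 1.1, 1.3, 9.0, 1.2, 1.0, 1.1, 1.2]
deadline_ms = 256 / 48_000 * 1000

average = sum(block_times_ms) / len(block_times_ms)  # throughput view
worst = max(block_times_ms)                          # realtime view

print(f"average {average:.2f} ms, worst {worst:.2f} ms, "
      f"deadline {deadline_ms:.2f} ms")
# The average is comfortably under the deadline, but the single 9 ms
# block still causes an audible glitch: realtime code is judged on
# its worst case, not its mean.
```

That worst-case constraint is why throughput-oriented language features (like GC pauses or JIT warm-up spikes) are disqualifying for audio even when the average numbers look great.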


I would argue that Julia's strong point is generating specialized code, and facilitating generic programming. Throughput is definitely the driver, but I think several aspects of Julia's design could benefit real-time programming as well.

Currently, there are many blockers to using Julia in real-time applications, such as dynamic memory allocation and a lack of thread safety. But I find it promising that a subset of Julia could be used for real-time programming.


Hi julesrms,

Some quick 2-second feedback - I tried running the default example in Brave, it failed, and then as a good user I wanted to report the bug to the community, but that requires finding the ROLI forum and signing up (providing DOB!) etc. I think it would be great if it were a bit easier to submit bug reports without as much hassle.


You could report the issue here: https://github.com/soul-lang/SOUL/issues

(We might open the soul.dev website too at some point, but that's private for now.)


I've just tried and it worked for me on OSX with Brave.



