Big fan of your OS X stuff! I still have an 11" 2012 MacBook Air with a Core i7 and 8 GB RAM; if only I could use it as my everyday machine, I'd be very happy!
I didn't know about Firefox Dynasty, it's very cool.
(Wow, and Archive Plus and Preview Plus… you're killing it!)
What's your experience with those shady USB-C to Magsafe 2 cables/adapters? Old MacBook batteries are bound to deplete faster and faster every year, it'd be nice to be able to charge using what's most common nowadays.
I wouldn't trust the adapters. Maybe in theory it's possible to do it right and bridge USB-C PD negotiation to the MagSafe 1-wire protocol, but you've got to trust that they handled both the USB-C part and the 1-wire part correctly.
There's not much rigorous testing I could find online; https://www.youtube.com/watch?v=Qg6pPDys-s0 tested a few and found that some spark when first connected, which implies they're not doing the 1-wire negotiation properly (real adapters prevent arcing by sensing that a connection is made before raising the voltage).
> What's your experience with those shady USB-C to Magsafe 2 cables/adapters?
I don't have experience--I would probably try to get a real one. I'm also a big proponent of getting batteries replaced. I know this all costs money, but if you like and are using this computer, make it nice!
I've got this idea that third-party batteries are mostly crap and die very fast, but it's based on ≈10-year-old experience (although the last official battery in my 2016 iPhone SE was just as bad).
Indeed: your comment made me curious, so I read the article even though I wasn't initially interested, and I gained a nice point of view about conversational interfaces to LLMs and how friction, usually a pure HCI concern, is being weaponized to bring about a profitable and inhumane world. I didn't need to read this today, but I'll surely remember it or think about it at some point.
Thanks for wasting my time! You might just want to exert a little more effort next time and explain why you felt that way, which would make your hard-earned piece of advice a useful contribution to this place instead of a weird way of reminding everyone that you exist, to the detriment of others.
I've been designing and developing digital musical instruments and synths for more than 10 years, including two commercial products used by professional musicians around the world.
Combined with a background in HCI research, I offer strategic assistance, interaction design, and software development services to companies interested in building reliable interactive products, be they hardware or software.
I work using a positive, humane, and mindful approach aimed at understanding the underlying needs of stakeholders and making the best of a situation. I try to avoid dogmatism in my own thinking but at the same time respect existing rules and processes.
I'm currently working with a handful of small companies as well as a public research institution, and I'm looking to expand my network outside of France.
If you have needs that intersect my background and interests, or if you want to chat about making fun, profound, and expressive tools, or anything pertaining to HCI, software, DSP, or music, don't hesitate to contact me!
The EaganMatrix (inside the Osmose) and the Hydrasynth are both great, and each one has its own approach. I think the Anyma synths are less beefy in terms of computational resources, but the synth engine offers more kinds of modules, more freedom in some way. Not that it's always useful to have 16 LFOs or envelopes, or to be able to modulate the curve of a mapping, but it sometimes makes trying an idea easier during sound design. As we started with a wind instrument (the Sylphyo), we also take special care to make support for this kind of MIDI controller effortless.
The synth engine in the Anyma Phi runs on an STM32F4. The UI and MIDI routing run on a separate STM32F4. No RTOS: we find cooperative multitasking much easier to reason about, and easier to debug. So far, we don't have any latency/jitter issues with this approach, although it required writing some things (e.g. graphics) in a specific way.
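To make the cooperative approach concrete, here's a minimal sketch of the general "superloop" pattern being described. The function names and timings are hypothetical, not Aodyo's actual firmware: the point is that every task is short and non-blocking, and long jobs like a screen redraw are chopped into small increments so the audio and MIDI paths never starve.

```cpp
// Hypothetical bare-metal cooperative "superloop" -- illustrative only.
// All hardware-facing functions below are assumed to exist elsewhere.
#include <cstdint>

uint32_t millis();              // assumed tick source (e.g. driven by SysTick)
void fill_audio_ring_buffer();  // keeps the DMA-fed codec buffer topped up
void poll_midi();               // drains the UART/USB MIDI FIFOs
bool draw_next_dirty_region();  // paints ONE region; returns true while work remains

int main() {
    uint32_t last_frame = 0;
    for (;;) {
        fill_audio_ring_buffer();   // highest priority: never let the codec run dry
        poll_midi();

        // Start a UI frame roughly every 20 ms; each loop pass then paints only
        // one small dirty region, spreading a full redraw over many iterations.
        const uint32_t now = millis();
        if (now - last_frame >= 20 && !draw_next_dirty_region()) {
            last_frame = now;       // frame finished, wait for the next tick
        }
    }
}
```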
The Omega runs on a mix of Cortex-A7 and STM32.
I have a pure software background but I came to appreciate the stability, predictability and simplicity of embedded development: you have a single runtime environment to master and you can use it fully, a Makefile is enough, and you have to be so careful with third-party code that you generally know how everything works from end to end. The really annoying downside is the total amount of hair lost chasing bugs where it's hard to know whether the hardware or the software is at fault.
In contrast, programming a cross-platform GUI is sometimes hell, and a VST has to deal with many more different configurations than a hardware synth; you're never sure which assumptions you can make. The first version of Anyma V crashed for many people, but we never saw it happen on the dozen machines we tested it on.
Interesting perspective. I can definitely see how you have the immediacy edge over the pain of the EaganMatrix, and having different engines besides the Hydra's wavetable-y core is a win, IMHO - though, yeah, both fit different needs.
I'm mostly an embedded guy (usually much lower-power ST parts), so it's neat to hear how you approached it. Having separate chips so the audio can't underrun as easily when the UI needs to react is a really nice design!
I see a lot of your engine is modified from Mutable Instruments, but you do have a good selection of original sound sources as well. What sets yours apart? Did you have a strong background in DSP before Aodyo?
I did a tiny bit of DSP and I've been exposed to the HCI/NIME community in the past, but that's it. Many modules in the Anyma are just reasonable implementations of clever formulae I didn't design but studied from papers :). And for the Mutable stuff, there was a lot of optimization work and many tradeoffs to make. We are lucky to have a sound designer with a good ear.
That said, we've been working for a while on our own waveguide models (Windsyo and others), and we have found some tricks I've never seen elsewhere. There's a lot to explore, especially when looking for "hybrid" acoustic-electronic sounds.
For sure. I really dig those hybrid sounds too. I'm particularly fascinated by the sounds that are more electromechanical (see Korg's Phase-8, or Rhythmic Robot's "Spark Gap"), so I'm glad to see more people trying to combine physical modeling and synthesis in smoother ways than just layering them.
Oh my. So, how much processing load are you typically at now?
You know your backers are, from what can be gleaned from the KS comms, (to put it mildly) not too convinced that Aodyo will provide more than enough juice (!= JUCE) this time for chaining up enough modules while guaranteeing 16-note poly? And this with a multi-timbral design?
(you might refer to your end-of-2023 update regarding the 4+1 core concept, which had to be changed, creating further delay, and so on)
We had to switch to a more powerful architecture and chip, but the voices are still dispatched across several processors. It'll be enough to sustain 16-note polyphony at 125% load, with some extra power left because we don't use the second core yet.
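Purely to illustrate what dispatching voices across processors can look like (the core count and the round-robin split below are made up, not the Omega's actual design): a fixed voice-to-core map keeps the worst-case load per core known in advance, which is what makes a polyphony guarantee possible.

```cpp
// Hypothetical illustration of static voice dispatch across DSP cores --
// the numbers and the partitioning scheme are invented for this example.
#include <array>
#include <cstddef>
#include <cstdio>

constexpr std::size_t kVoices = 16;  // target polyphony
constexpr std::size_t kCores  = 2;   // assumed number of DSP processors

// Each voice is permanently owned by one core; note-ons are routed to the
// owner's queue, so per-core load is bounded at design time.
constexpr std::size_t owner(std::size_t voice) { return voice % kCores; }

int main() {
    std::array<std::size_t, kCores> per_core{};
    for (std::size_t v = 0; v < kVoices; ++v)
        ++per_core[owner(v)];

    for (std::size_t c = 0; c < kCores; ++c)
        std::printf("core %zu renders %zu voices\n", c, per_core[c]);
}
```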
Thanks. It's weird: we cross-compile using llvm-mingw from macOS and then run the Inno Setup compiler using Wine inside a fresh Docker image (Linux guest). I'm not sure how I could obtain more info on what caused both antivirus products to trigger, but we'll look into it.
We use JUCE for building the app/plugin. It handles the GUI, the audio/MIDI devices, and the plugin API. The synth engine was originally developed to run on an STM32F4 (what the Anyma Phi uses), so almost everything is purpose-built (with good old Makefiles).
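For the curious, that kind of split usually ends up looking something like the sketch below. This is a minimal, hypothetical example, not Anyma V's actual code: the engine class and its prepare/handleMidi/render API are invented, and only the key JUCE overrides are shown.

```cpp
// Hypothetical sketch: a purpose-built DSP engine wrapped in a JUCE
// AudioProcessor. JUCE owns the host boundary; the engine stays plain C++.
#include <juce_audio_processors/juce_audio_processors.h>

class AnymaEngine {            // stand-in for the STM32-born engine (invented API)
public:
    void prepare(double sampleRate) { sr = sampleRate; }
    void handleMidi(const juce::MidiMessage&) { /* note on/off, CCs... */ }
    void render(float* left, float* right, int numSamples) {
        // placeholder: silence; the real engine would run its voice graph here
        juce::FloatVectorOperations::clear(left, numSamples);
        juce::FloatVectorOperations::clear(right, numSamples);
    }
private:
    double sr = 44100.0;
};

class WrapperProcessor : public juce::AudioProcessor {
public:
    WrapperProcessor()
        : AudioProcessor(BusesProperties().withOutput(
              "Output", juce::AudioChannelSet::stereo(), true)) {}

    void prepareToPlay(double sampleRate, int) override { engine.prepare(sampleRate); }
    void releaseResources() override {}

    void processBlock(juce::AudioBuffer<float>& buffer,
                      juce::MidiBuffer& midi) override {
        for (const auto metadata : midi)          // forward host MIDI to the engine
            engine.handleMidi(metadata.getMessage());
        engine.render(buffer.getWritePointer(0), buffer.getWritePointer(1),
                      buffer.getNumSamples());
    }

    // The remaining mandatory AudioProcessor overrides (name, programs, state,
    // editor...) are omitted to keep the sketch short.

private:
    AnymaEngine engine;
};
```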
On the hardware, we use an immediate-mode UI, and it's hard to go back to something like JUCE, which is flexible but a bit quirky. I often write GUIs with Cocoa for our internal tools (simulators, DSP models, etc.) during the development of our hardware products, and it's a much more comfortable environment.
In 2019 I had an early version of the Anyma engine running on Dear ImGui; it was really fun, but it would have required too much effort to properly manage the audio/MIDI/plugin aspects in a cross-platform way, and the backends were incomplete at the time. JUCE was too much of a time saver to ignore for a team of 1.5.
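For readers who haven't used the immediate-mode style: the whole UI is re-declared every frame as plain function calls, with no retained widget tree to keep in sync. A tiny Dear ImGui-flavoured sketch below shows the idea; the widget calls are real ImGui API, while the "cutoff" parameter and the engine call are hypothetical.

```cpp
// Tiny Dear ImGui-style sketch of an immediate-mode UI: the interface is
// redeclared every frame. "cutoff" is a made-up synth parameter.
#include "imgui.h"

static float cutoff = 1000.0f;   // parameter lives in plain application state

void drawSynthPanel() {          // called once per frame from the render loop
    ImGui::Begin("Filter");
    if (ImGui::SliderFloat("Cutoff (Hz)", &cutoff, 20.0f, 20000.0f)) {
        // value changed this frame -> push it straight to the engine
        // engine.setCutoff(cutoff);   // hypothetical engine call
    }
    ImGui::End();
}
```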
I'm curious, if you don't use C++ and JUCE, what is your stack?
A DIY breath controller like you describe is a fun little project and adds a whole new dimension to your playing. Most wind synths expect MIDI CC2 for breath input.
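If anyone wants to try it, here's a rough Arduino-flavoured sketch of the idea. The pin, pressure range, and update rate are made-up values you'd calibrate yourself; it assumes a board with native USB (e.g. a 32u4 or SAMD) using the MIDIUSB library, with an analog pressure sensor wired to A0.

```cpp
// Rough DIY breath controller sketch: read a pressure sensor, send MIDI CC2.
// Calibration values are placeholders; adjust kRestLevel/kFullLevel for your sensor.
#include <MIDIUSB.h>

const int kSensorPin = A0;
const int kRestLevel = 80;    // ADC reading with no breath (calibrate this)
const int kFullLevel = 700;   // ADC reading at a hard blow (calibrate this)
byte      lastCc     = 255;   // last value sent, to avoid flooding the bus

void sendBreathCc(byte value) {
  midiEventPacket_t cc = {0x0B, 0xB0, 2, value};  // CC2 (breath), channel 1
  MidiUSB.sendMIDI(cc);
  MidiUSB.flush();
}

void setup() {
  // nothing to configure: analogRead works out of the box on A0
}

void loop() {
  int raw = analogRead(kSensorPin);
  int cc  = map(constrain(raw, kRestLevel, kFullLevel),
                kRestLevel, kFullLevel, 0, 127);
  if ((byte) cc != lastCc) {   // only send when the value actually changes
    lastCc = (byte) cc;
    sendBreathCc(lastCc);
  }
  delay(2);                    // ~500 updates/s is plenty for breath data
}
```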
It doesn't look like LMMS supports VST3 [1], but we had a report of the Bitwig Linux version looking for a VST3 bundle (the new recommended format) instead of a single library file with a ".vst3" extension, so if I'm wrong, the issue could be that as well.
Anyway, we're looking into building for LADSPA or LV2 for a future update.