My initial goal was to fix some mistakes in the MIDI files I recorded from my keyboard. I was also interested in making dynamic tempo and expression changes without dealing with complicated DAW GUIs.
Now I'm working on a synth that uses MTXT as its first-class recording format, and it's also pushing me to fine-tune a language model on it.
I feel a few points weren't addressed in the article.
1. Size. The biggest problem with JSON shows up when things get too big, so other formats might be better there. Still, as a reminder, JSON has a binary counterpart named BSON.
2. Zero batteries. JSON is human-readable and an almost self-explanatory format. Most languages have built-in support or a quick drop-in for JSON. Still, it's easy to implement a limited JSON parser from scratch when needed (e.g. a pure one-function C parser on a tiny device).
Having worked with Protobuf and MessagePack in the past, you have much more tooling involved, especially if data passes between parts written in different languages.
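To illustrate the "zero batteries" point, here's a sketch of how small a limited JSON parser can be. This is a toy recursive-descent parser for a JSON subset (no string escapes, no scientific notation, not a full RFC 8259 implementation); a one-function C version on a tiny device would follow the same shape.

```python
# Toy parser for a JSON subset: objects, arrays, double-quoted strings
# without escapes, simple numbers, and true/false/null. Sketch only.

def parse(text):
    value, _ = _value(text, _skip_ws(text, 0))
    return value

def _skip_ws(s, i):
    while i < len(s) and s[i] in " \t\r\n":
        i += 1
    return i

def _string(s, i):
    j = s.index('"', i + 1)   # no escape handling in this sketch
    return s[i + 1:j], j + 1

def _value(s, i):
    c = s[i]
    if c == "{":
        obj = {}
        i = _skip_ws(s, i + 1)
        if s[i] == "}":
            return obj, i + 1
        while True:
            key, i = _string(s, _skip_ws(s, i))
            i = _skip_ws(s, i)
            assert s[i] == ":"
            obj[key], i = _value(s, _skip_ws(s, i + 1))
            i = _skip_ws(s, i)
            if s[i] == "}":
                return obj, i + 1
            assert s[i] == ","
            i += 1
    if c == "[":
        arr = []
        i = _skip_ws(s, i + 1)
        if s[i] == "]":
            return arr, i + 1
        while True:
            item, i = _value(s, i)
            arr.append(item)
            i = _skip_ws(s, i)
            if s[i] == "]":
                return arr, i + 1
            assert s[i] == ","
            i = _skip_ws(s, i + 1)
    if c == '"':
        return _string(s, i)
    for lit, val in (("true", True), ("false", False), ("null", None)):
        if s.startswith(lit, i):
            return val, i + len(lit)
    # Plain number: digits, minus sign, decimal point.
    j = i
    while j < len(s) and (s[j].isdigit() or s[j] in "-."):
        j += 1
    num = s[i:j]
    return (float(num) if "." in num else int(num)), j
```

Real-world parsers need escapes, exponents, and error reporting, but the core shape really is this small.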
3. Validation. JSON itself is simple, but there are solutions such as JSON Schema.
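As an illustration, a small JSON Schema (in draft 2020-12 syntax) for a hypothetical message object might look like this; the field names are made up for the example:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "id":   { "type": "integer", "minimum": 0 },
    "name": { "type": "string" },
    "tags": { "type": "array", "items": { "type": "string" } }
  },
  "required": ["id", "name"]
}
```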
It is a JSON superset with binary encoding, created by MongoDB. (And there is even a JSON encoding of BSON, called extended JSON.)
AFAIK there is no widely adopted binary pure adaptation of JSON. (There are application-specific storage formats, like PostgreSQL JSONB, or SQLite JSONB.)
---
Moreover, JSON is relatively compact. BSON and other self-descriptive binary formats are often around the same size as JSON. MessagePack aggressively tries to be compact and is, depending on the data. BSON doesn't try to be compact; rather, it improves parse speed.
CBOR [0] is probably the closest to a widely adopted "pure" adaptation of JSON. It is still technically a superset of JSON, but it tries to stay closely matched and frequently cross-references the JSON specs directly, especially because it is also an IETF-tracked standard like JSON; and as far as wide adoption goes, it is included in web standards like WebAuthn today. (For instance, you can't handle Passkeys without some amount of CBOR. The presumed next step to wider adoption, now that all browsers have internal CBOR encoders/decoders, would be to add a web platform JS API for it as well.)
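For a feel of how closely CBOR tracks the JSON data model, here is a toy encoder for a small subset of it (unsigned ints, UTF-8 strings, arrays, maps, booleans, null), following the major-type scheme of RFC 8949. This is a sketch, not a production codec.

```python
def _head(major, n):
    # A CBOR item starts with a head byte: 3-bit major type + 5-bit info.
    if n < 24:
        return bytes([(major << 5) | n])
    if n < 256:
        return bytes([(major << 5) | 24, n])        # 1-byte length follows
    if n < 65536:
        return bytes([(major << 5) | 25]) + n.to_bytes(2, "big")
    raise ValueError("sketch only handles values/lengths < 2**16")

def encode(value):
    # Toy CBOR encoder covering the JSON-like subset.
    if isinstance(value, bool) or value is None:
        # false/true/null are simple values 0xF4/0xF5/0xF6 (major type 7).
        return bytes([0xF4 + [False, True, None].index(value)])
    if isinstance(value, int) and value >= 0:
        return _head(0, value)                       # major type 0: uint
    if isinstance(value, str):
        data = value.encode("utf-8")
        return _head(3, len(data)) + data            # major type 3: text
    if isinstance(value, list):
        return _head(4, len(value)) + b"".join(      # major type 4: array
            encode(v) for v in value)
    if isinstance(value, dict):
        return _head(5, len(value)) + b"".join(      # major type 5: map
            encode(k) + encode(v) for k, v in value.items())
    raise TypeError(f"unsupported in this sketch: {type(value)}")
```

Note how every JSON value maps one-to-one onto a CBOR major type; the full spec adds negative ints, floats, byte strings, and tags on top.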
However, yes, JSON compresses extremely well even with ancient gzip, and especially with Brotli, and desiring compaction of your API responses alone isn't necessarily the best reason to prefer a binary encoding of JSON over just using JSON and letting compression do its thing.
Compressed jsonlines with strong schema seems to cover most cases where you aren't severely constrained by CPU or small message size (i.e., mostly embedded stuff).
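A quick sketch of that pattern using only the Python standard library: write repetitive records as gzipped JSON Lines, then read them back. The record fields here are made up for illustration.

```python
import gzip
import json

# Repetitive records like these are where compression shines:
# the repeated keys compress away almost entirely.
records = [{"sensor": "temp", "seq": i, "value": 20.0 + i % 3}
           for i in range(1000)]

# One JSON document per line ("jsonlines"), then gzip the whole stream.
raw = "".join(json.dumps(r) + "\n" for r in records).encode("utf-8")
packed = gzip.compress(raw)
print(len(raw), len(packed))  # compressed is a small fraction of raw

# Reading back: decompress, then parse line by line.
decoded = [json.loads(line)
           for line in gzip.decompress(packed).splitlines()]
assert decoded == records
```

In a real pipeline you'd stream with `gzip.open(path, "at")` rather than buffering everything in memory, but the shape is the same.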
I wouldn't say it's only speed.
I'd been on Firefox for years, but eventually ended up surrendering to the Apple ecosystem. On Apple silicon, Firefox (at least back then) wasn't sleeping that well, and FF's tab sync between my devices was also less than I'd desired.
So performance in general is more like it...
that includes not hurting my battery life.
I've used all 3 browsers (Chrome/Safari/FF) daily doing web dev for years now, and I'm convinced Safari just feels faster as a cohesive Mac app, with the animations and whatnot, but isn't in general when using the internet day-to-day. FF is little different from Chrome/Safari.
Also as a dev Safari is becoming the new IE. I've had a whole suite of Safari-only bugs in the past 2yrs and lots of browser crash reports from users.
We're now in a mixed computing era that is shaping the future of computing:
Ignoring niche OSes (e.g. consumer electronics such as TVs/dishwashers/etc.):
- PC (Windows, Linux, macOS)
- Mobile (to simplify, this includes phones, watches and ongoing AR / AI progress based around Android and iOS with some Meta)
Mobile already "broke" the rules, and we have locked-down devices with simplified "app stores" and more complex off-the-market OSes, since each device is a unique SoC combination, often with closed-source blobs.
The web brought a major change for desktop (which I guess was part of the assumption behind ChromeOS), but there are still some scenarios where native APIs are needed.
On the other hand, the current desktop OS market is a mess: Windows is focusing on intrusive features and enforcing user accounts, Apple is all about "notarizing" and making the desktop similar to mobile, and Linux has diverged into multiple variants.
I really hope for an opinionated Linux distribution promoted by a big player (I've always hoped Adobe or someone of the right size would understand the need, and their ability to bring enough common products to it).
Having said that,
Linux has made great advances over the years. Many companies, including closed-source ones, already have some support, and gaming has advanced greatly as well.
Anyway,
Making a "locked OS" won't do much. So unless Google plans to shoot themselves in the foot, they'll need to make it open enough.
I use C++ daily, so whenever I do JS/TS or some JavaScript variant, since I don't use it daily, updating becomes a very complex task. Frameworks and deps change APIs very frequently.
It's also very confusing (and I think those attack vectors benefit exactly from that), since you have a dependency, but the dep itself depends on another dep's version.
Building a basic CapacitorJS / Svelte app, as an example, results in many deps.
It might be a newbie question, but,
Is there any solution or workflow where you don't end up with this dependency hell?
Don't use a framework? Loading a JS script on a page that says "when a update b" hasn't changed much in about 20 years.
Maybe I'm being a bit trite but the world of JavaScript is not some mysterious place separate from all other web programming, you can make bad decisions on either side of the stack. These comments always read like devs suddenly realizing the world of user interactions is more complicated and has more edge cases than they think.
What I love about this is how physical it is.
So yeah, there's some board running DSP, but the design is amazing.
It really relates to some recent posts on HN about many objects losing their physical UX. From an age of buttons and tactile interfaces, everything became more touch-based / app-based, which indeed cuts prices and allows easier updating, but also lacks some romance, which is exactly what this device shows.
I have a few of those, and heaps of my friends get sucked deeply into playing with them at the expense of whatever they actually came over for. "Nah, I'm not hungry. You should all eat without me, I'll just finish tweaking this groove."
They're aimed (as the company name hints) at a little older than 3-year-olds.
One hint for the OP: it's so cool how you can plug those toys into each other in a chain and have them all sync - so the percussion, bass, and melody can all be programmed on three different POs and automatically play together.
I don't think it matters all that much. Any specific opinions are going to be more a matter of personal taste.
Get one aimed at rhythm/percussion, one that has some bass/lowend sounds, and something to make melodies.
I have a PO12, PO14, and PO16. If I was going to buy another one, it'd be a toss-up between the PO33 and the PO35. If I were starting from scratch, I think I'd still get the PO14, I might get a PO24 or PO32 instead of the PO12, and I might get a PO28 instead of the 16. Or if said young middle schooler has access to sounds they can sample easily enough (like, say, GarageBand on an iPad), maybe jump straight to one of the sampling ones (PO33 or 35) for the lead/melody sounds.
Agreed. That's half the reason that no matter how accurate a virtual synthesizer can be (like the Mac App Moog Model D), there's just no substitute for being able to physically fiddle the knobs and dials.
I don't see why we cannot build an app that, when connected to an external monitor, switches to a "Desktop Environment". Maybe even a hacked version of UTM[1] that exposes a fully functional OS on the monitor.
With the power of M-chips, this would cannibalize MacBooks via iPad Air / Pro. They are sitting on a golden cash flow and not willing to revolutionize computing again (as the iPhone did).
Just as an N=1, I would rather pay a recurring fee in the Disney-Netflix range to Apple to get more liberty in the usage of my machines. But I think they don't dare to go those routes, because they need the broad market base and cannot extract the current cash flow from a smaller base, while setting expectations that the Googles and Samsungs can copy.
Industry leader's dilemma. Apple currently settles on market differentiation via physical products.
Historically, cannibalizing has always been the right choice when it comes to such things. That was a major point of the first iPhone: it was a full replacement for your iPod, which was instrumental in its success. All this thinking does is cloud one's judgement and let competitors succeed.
Not saying you are wrong, this may be the reason Apple operates nowadays, but I maintain it is shortsighted.
Two bits floating in my mind: I'm in management (different sector, totally different scale), and deciding to move forward against a market as a market leader is a really scary decision. We did, and changed our proposition against a trend in the market. The market mostly followed our lead. That's what we hoped for, but sure couldn't count on at the time of the decision. So we had to make sure to have all stakeholders involved in the risk - what if most of our customers just left? Then suppose you are in management at Apple. The stakes are massive. How would you communicate this shift?
The other one is: you should take the strength of your opposition into account when making bold moves. Android / Google / the brands fabricating the products, I would say (no need for the old debate), are market followers. They are good at following and produce more technically diverse products, minus the margins. If you do not expect your opposition to make the bold move first, but do expect them to follow your bold move, I would argue you should be less likely to play bold moves unless you know they cannot follow you. So game theory, I think, also favors the status quo for Apple.
That said, the iPhone was more expensive than the iPod, and replaced 1 Apple device (plus a device made by someone else like Nokia) with 1 alternative Apple device. This had an expected increase in revenue per customer.
Replacing the MacBook + iPad with an iPhone + some dock accessories might reduce revenue per customer.
But Mac sales pale in comparison to iPhone, and are similar to iPad numbers. So whatever revenue they would lose by not selling Macs with macOS, they could easily make up from additional sales of iPhones and iPads with macOS.
Besides, they've increasingly been expanding iPadOS to have more desktop-like features, so it wouldn't be far-fetched to offer full-blown macOS on these devices. It's not a hardware issue at all at this point.
> With the power of M-chips, this would cannibalize MacBooks via iPad Air / Pro.
Only for the truly low end. The thermals alone are a serious difference, you can't expect an iPad-class device to support the same power dissipation as a legit MacBook.
The MacBook Air is a legit MacBook and not that much heftier than the iPad. With how powerful and efficient M chips are, they could work out just fine for a lot of people despite the more constrained thermals.
They're not doing it today because current Apple leadership doesn't have the same incisiveness as the one back when they were sacrificing their most successful product on the iPhone altar so the competition couldn't. And to be fair, Apple has a much stronger position with a wider moat than they did back then. So they can afford to give the competition more time to compete.
> They're not doing it today because current Apple leadership doesn't have the same incisiveness as the one back when they were sacrificing their most successful product on the iPhone altar so the competition can't.
Apple wouldn't just sacrifice the entry-level MacBook product category - and I'm not even sure about that. The look-and-feel of a "display with attached keyboard" (i.e. ThinkPad X1 Tablet-style) is vastly different from a bottom-heavy MacBook with actual hinges. The former isn't really usable as a literal laptop unless you've got some seriously long upper legs.
The more important thing that Apple would have to sacrifice is the App Store cash cow and users not having root rights. On an iPad or iPhone I'm willing to accept that, but on a machine I actually want to do work on? No way in hell.
But that's it right here. It just takes boiling the frog slowly enough. The high powered M-powered iPads are already testing the waters of what people will accept for work (I don't think they're aimed purely at content consumption like the "smaller" iPads). I think Apple can afford to wait because they don't need to cannibalize anything today, and because the replacement isn't strictly a superset of what it's replacing, it comes with the caveats you mention. As soon as the market is ready to tolerate more lock-in, it might happen.
Enough people do just emails/Teams/Office for work so plugging in an iPhone and turning it into a desktop with mouse, keyboard, and external screen(s) can tick all the boxes for usability. Or an iPad with keyboard since similar sized devices were historically used for portability. Most work devices are locked down anyway, no root, no software installation.
> Apple wouldn't just sacrifice the entry-level MacBook product category and I'm not even sure about that - the look-and-feel of a "display with attached keyboard" (i.e. ThinkPad X1 Tablet-style) is vastly different from a bottom-heavy MacBook with actual hinges. The former isn't really usable as a literal laptop unless you got some seriously long upper legs.
The iPad Pro with Magic Keyboard is just that, and in my personal experience does very well even on shorter legs due to its weight distribution. Were Apple to go down the route of actually enabling Xcode, etc. on iPads, they'd likely invest a bit more into the ergonomics, of course, but they are already there and not comparable to Lenovo's efforts in that regard.
On the contrary, I'm sure they'd be more than happy to part with macbooks if they could retain their developers. But then you could probably kiss your binary freedom goodbye.
With the first iPhone Steve Jobs wanted a web based future.
Then jailbreaking got us native apps and AppStores and the rest is history.
I still remember my Nexus One running flash!
Anyway, I wonder if the web path would’ve been the chosen one, how Apple would’ve played the web standards that are crippled today especially on Safari.
So if I got it right,
This is mostly a way to have branches within a specific release for various levels of CPUs and their support of SIMD and other modern opcodes.
And if I have it right,
The main advantage should come with package managers and open-source software, where the compiled binaries would be branched to take advantage of and optimize for newer CPU features.
Still, this would be noticeable mostly for apps that benefit from those features, such as audio DSP, or, as mentioned, SSL and crypto.
I would expect compression, encryption, and codecs to have the least noticeable benefit because these already do runtime dispatch to routines suited to the CPU where they are running, regardless of the architecture level targeted at compile time.
That's a lot of surgery. These libraries do not all share one way to do it. For example zstd will switch to static BMI2 dispatch if it was targeting Haswell or later at compile time, but other libraries don't have that property and will need defines.
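The runtime-dispatch pattern those libraries use can be sketched language-agnostically: probe the CPU once at startup, then publish a pointer to the best available kernel. A toy Python version of that pattern (feature names and kernels are made up for illustration; real code would probe via cpuid/hwcaps):

```python
def detect_features():
    # Stand-in for a cpuid/hwcaps probe; on Linux you might parse
    # /proc/cpuinfo, but here we just pretend this machine has AVX2.
    return {"sse2", "avx2"}  # hypothetical result

def sum_scalar(xs):
    # Portable baseline that runs anywhere.
    return sum(xs)

def sum_avx2(xs):
    # Pretend this is the hand-vectorized AVX2 kernel;
    # same result as the baseline, just "faster".
    return sum(xs)

# Dispatch table: best candidate first, baseline last with no requirement.
_CANDIDATES = [
    ("avx2", sum_avx2),
    (None, sum_scalar),
]

def select(features):
    # Resolve once at startup, like an ifunc/hwcaps resolver would.
    for required, fn in _CANDIDATES:
        if required is None or required in features:
            return fn

fast_sum = select(detect_features())
```

Because selection happens once at load time rather than per call, the compile-time architecture level matters much less for these hot paths, which is the point being made above.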
Also, any app that uses it would benefit from being added to the repo, assuring usability in addition to readability.