It isn't, and fundamentally can't be, as it has no support for binary blobs,
and base64-encoded strings are a pretty terrible solution for this,
and the space overhead of many small numbers can also be pretty bad.
While a lot of IPC use cases are so little performance-sensitive that JSON is more than good enough, and large blobs are normally not sent over messages (instead you e.g. send a fd), there are still some use cases where you use IPC to listen to system events and a pretty insane number of them is emitted. In that case JSON might come back and bite you, less due to marshaling performance and more due to the space overhead, which depending on the kind of events can easily be 33%+.
But what I do not know is how relevant such edge cases are.
Probably the biggest issue might be an IPC system mainly used on 64-bit systems not being able to properly represent all 64-bit integers (JSON numbers are commonly parsed as IEEE-754 doubles, which are only exact up to 2^53)... but probably that is fine too.
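A minimal sketch of both overheads, using only the Python standard library (purely illustrative, not tied to any particular IPC system):

    import base64, json

    # base64 turns every 3 payload bytes into 4 ASCII bytes: ~33% overhead
    blob = bytes(range(256)) * 12          # 3072 bytes of binary payload
    encoded = base64.b64encode(blob)
    print(len(blob), len(encoded))         # 3072 4096

    # JSON numbers are commonly parsed as doubles; integers above 2**53
    # silently lose precision. Python's json keeps ints exact, so we force
    # float parsing here to mimic such consumers (e.g. JavaScript).
    big = 2**63 - 1                        # max int64, e.g. an inode or handle
    roundtripped = int(json.loads(json.dumps(big), parse_int=float))
    print(big == roundtripped)             # False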
The protocol supports “upgrading” requests. If your service relies on sending large binary blobs (over this? Why?), then it doesn’t have to be done with JSON.
For example, the metadata of the blob could be returned via JSON, then the request is “upgraded” to a pure binary pipe and the results read as-is.
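Roughly, that pattern on the client side (a hedged sketch over a Unix socket; the socket path, method name, field names, and NUL-terminated JSON framing are all assumptions for illustration, not the actual protocol's API):

    import json, socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/example/service.sock")    # hypothetical path

    # 1. Ordinary JSON request asking for the connection to be upgraded
    #    to raw binary once the reply has been sent.
    req = {"method": "org.example.FetchBlob", "upgrade": True}
    sock.sendall(json.dumps(req).encode() + b"\0")

    # 2. The reply is still JSON and carries only the blob's metadata.
    buf = b""
    while not buf.endswith(b"\0"):
        buf += sock.recv(4096)
    meta = json.loads(buf[:-1])                  # e.g. {"size": 3072}

    # 3. From here on the connection is a plain byte pipe: read the blob
    #    as-is, with no base64 and no escaping.
    blob = b""
    while len(blob) < meta["size"]:
        blob += sock.recv(65536)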
Binary blobs are just the biggest example and were only mentioned in relation to the "lingua franca" argument; many other common things are also larger in JSON. Only if you have many larger, unescaped UTF-8 strings does this overhead amortize. E.g. UUIDs are not uncommonly sent around, and a UUID is 18 bytes in msgpack as a bin value versus 38 bytes in JSON (not including `:` and `,`). That's 211% of the storage cost. Multiply it by something that keeps producing endless amounts of events (e.g. some unmount/mount loop) and that difference can matter.
Though yes, for most use cases this will never matter.
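For the curious, the arithmetic behind those numbers (standard library only; the msgpack `bin 8` framing is built by hand here rather than pulling in a msgpack package):

    import json, uuid

    u = uuid.uuid4()

    # msgpack "bin 8": 0xc4 marker + 1 length byte + 16 raw bytes = 18 bytes
    msgpack_bin = b"\xc4" + bytes([len(u.bytes)]) + u.bytes

    # JSON: the 36-char canonical hex form plus two quotes = 38 bytes
    json_form = json.dumps(str(u)).encode()

    print(len(msgpack_bin), len(json_form))                # 18 38
    print(round(len(json_form) / len(msgpack_bin) * 100))  # 211 (%)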
I get your point, but you have to understand that for every second you’ve spent writing that comment, globally hundreds of millions of HTTP responses have been processed that contain UUIDs of some kind.
Yes, there’s a more compact format than hex-encoding UUID values. However, it simply does not matter for any use case this targets.
16 bytes vs 38 bytes is completely meaningless in the context of a local process sending a request to a local daemon. It’s meaningless when making an HTTP request as well, unfortunately.
I’d have loved Arrow to be the format chosen, but that’s not lowering the barrier to entry much.