I've used it in the past (for quite a small deployment, I must say), and have been very happy with it. The diff mode in particular is very powerful: you can see exactly what changes you'll apply compared to what's currently deployed.
This might be a bit of a shameless plug, but because you ask :)
Regarding no sbt, I would highly recommend having a look at the Mill build tool https://mill-build.org. I personally find it a very pleasant replacement for sbt, mostly for the following reasons:
- it uses regular Scala code to define a build
- it isn't DSL-heavy like sbt, which means it's A) easy to debug by simply searching, and B) anyone unfamiliar can get a rough idea of what's going on at first glance
- it is designed around a generic task graph and isn't centered around the JVM world (disclaimer: I'm helping with development, and recently first-class support for Python and JavaScript projects was added)
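To give a flavor of the first point, a minimal Mill build is just regular Scala code. This is a sketch only; the module name and dependency are made up for illustration, and exact task names vary a bit between Mill versions:

```scala
// build.mill -- minimal sketch of a Mill build definition
package build
import mill._, scalalib._

// Each module is a plain Scala object; tasks are plain defs,
// so you can grep and navigate the build like any other code.
object app extends ScalaModule {
  def scalaVersion = "3.3.3"
  def ivyDeps = Agg(ivy"com.lihaoyi::upickle:3.3.1")
}
```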
The article mentions they used an experimental language feature, which is enabled via a special compilation flag and came with a big warning from the start that it was in fact experimental and could be removed at any time. I highly doubt that many people use libraries that rely on such features.
As a side note, I've personally migrated several projects to Scala 3 (back when it was still called Dotty) and never had any issues with 3rd-party dependencies, since Scala 3 is backwards binary-compatible with Scala 2.13.
AFAIK, it's not strictly true that unicast addresses are required to use a /64 network identifier.
It's common, almost necessary even, for environments with dynamic clients to use /64 subnets (precisely so that SLAAC works), but in a static environment it's perfectly fine to use prefixes longer than /64 (e.g. delegating a /80 to each individual host in a datacenter, for virtualization applications, etc.).
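To make the /80-per-host idea concrete, here's a small sketch using Python's standard ipaddress module (the 2001:db8::/32 prefix is the IPv6 documentation range, used purely as an example):

```python
import ipaddress
from itertools import islice

# One delegated /64; SLAAC only works on /64s, which is why
# networks with dynamic clients stick to that size.
net = ipaddress.ip_network("2001:db8:0:1::/64")

# In a static environment, you can carve it into per-host /80s
# (there are 2**16 of them); take the first three as an example.
per_host = list(islice(net.subnets(new_prefix=80), 3))
for p in per_host:
    print(p)
```

Each /80 still leaves 48 bits of host space, plenty for e.g. one address per VM on that host.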
Hence, I'm wondering: which spec are you referring to that would be broken?
And to your point: yeah, you can step outside the spec and things can work in controlled environments, but "there be dragons" when you're dealing with interoperability at a large scale (in this case, Android expecting /64s per the RFC).
One data point that really highlights this is that it is used in the flight software of Airbus planes. That alone is a strong indicator that it is extremely robust, and that it will be around for a very long time. There's a whole list of additional industry use cases [1] that further support this claim.
Location: Lausanne, Switzerland
Remote: yes
Willing to relocate: no
Technologies: compilers, distributed systems, developer tooling, and DevOps. 15 years of experience in Scala, deep knowledge of the JVM. Also multi-year experience with Python, C, and C++, and open to any other language.
Résumé/CV: https://www.crashbox.io/cv.pdf
Email: in the resume
> I think the biggest thing is that the minimum extra spend if you want in house managed infra is basically the salary of a full time sysadmin + hardware costs.
The cloud promises to make admin tasks easier; however, I have never seen it eliminate the role of a sysadmin in practice. In my experience, most organizations that run their infra in a cloud still have dedicated admin roles (often called "DevOps" or infra teams).
Hence, I think the claim that a sysadmin's salary is an extra expense of self-managed infrastructure is exaggerated. You may need more sysadmins to achieve the same capabilities in a self-managed setup than in the cloud, but the cost doesn't scale linearly.
I recommend having a look at Mill. It's versatile, built on simple foundations, and implements many concepts from general-purpose build tools such as Bazel (though of course it was designed for Scala). It's also easy to call from various scripts, and it doesn't require you to design your project around your build tool.
At work we've used Mill to replace first sbt and then Gradle in another project, and we haven't looked back. It worked out of the box for our JVM projects, and we trivially wrote custom "rules" to integrate CMake-based C++ projects into the build.
Seconded. Mill is a very versatile build tool. It models builds as DAGs of tasks, and thus only reruns what is necessary after a change. Other tools like Bazel are built around this model too, but IMO Mill has "taste" in the way it is configured. It also gets out of your way: you can seamlessly integrate it into shell scripts, and you don't need to make your project revolve around your build tool.
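To illustrate the scripting point: driving Mill from a shell script is just invoking tasks by name. The module name here ("app") is assumed for illustration:

```shell
#!/bin/sh
# Sketch: calling Mill tasks from a plain shell script.
./mill app.compile        # recompiles only out-of-date tasks
./mill app.test           # runs the test suite
./mill show app.assembly  # prints the path of the assembled jar
```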
I find that one of the best things about using a managed runtime such as the JVM is the ability to get stack traces when things go wrong. When debugging, it is a considerable time saver to be able to determine the causality chain that led to a specific failure.
Unfortunately, every library that abstracts concurrency at the application level breaks the ability to get meaningful stack traces; at least every one I know of, including ZIO, Akka, Monix, plain Futures, etc. I know there is tooling to counteract that (such as the abstractions used in distributed tracing), but that again lives at the language level.
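The Scala libraries mentioned aren't easily demonstrated in a few lines, but the underlying phenomenon is not Scala-specific. Here's a Python analogue using a thread pool: the traceback of the failed task contains the worker-side frames, but the frame that actually submitted the work is gone.

```python
import concurrent.futures
import traceback

def inner():
    raise RuntimeError("boom")

def submit_work(pool):
    # The failure happens later, on a worker thread; this frame
    # will NOT appear in the traceback seen by the caller of result().
    return pool.submit(inner)

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    fut = submit_work(pool)
    try:
        fut.result()
    except RuntimeError as exc:
        # Frame names in the observed traceback: the worker's frames
        # (including inner) are there, but submit_work is not.
        names = [f.name for f in traceback.extract_tb(exc.__traceback__)]

print(names)
```

The same causality gap appears whenever a continuation runs on a different thread or scheduler than the code that created it, which is exactly the situation with Futures, actors, and effect runtimes.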
In my experience, for all but the most advanced applications, the debuggability advantages of linear code outweigh the performance advantages gained by abstracting over execution contexts. Thus, I would posit that concurrency is best dealt with at the platform level, not the language level, especially when starting a project.
Of course there are situations where a library can make some concurrency task appear trivial, but as long as there is no good tooling, the time saved by beautiful abstractions tends to be paid back 5-fold when those abstractions break (which they often do as an application grows).