Occam and Lisp were both developed from the Lambda calculus and are quite close relatives. The main difference is that Lisp is strongly, dynamically typed (like Python), while Occam is statically typed (like, say, Java).
Occam was based on Hoare's CSP rather than the Lambda Calculus, and the relationship between your average Common Lisp program and the Lambda Calculus is pretty distant.
> This same phenomenon is the reason why John Ousterhout wrote Tcl and the GNU project started Guile.
Except that Guile is an implementation of Scheme, which is essentially a somewhat minimalist variant of Lisp. So in this case, a real programming language was intentionally included as the configuration language.
And this is the same concept behind GNU Emacs being an editor written in Lisp on top of C routines - very similar to how data science is often done in Python on top of NumPy and several other packages.
> Async Rust is especially problematic in the enterprise world where large software is built out of micro-services connected through RPC.
A weird way to use Rust, since you can do a lot of the messaging within the process and use the computing power much more efficiently.
RPC is essentially message passing. Message passing is a way to avoid mutable shared state - this is the model with which Go became successful.
RPC surely has its uses, but message passing is another, and very often inferior, solution to a problem set for which Rust has excellent solutions of its own.
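To illustrate what "messaging within the process" can look like, here is a minimal Rust sketch (plain std, no async runtime; the worker and message type are invented for the example) where the "service" is a thread receiving requests over a channel instead of over an RPC socket:

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel::<u64>();

        // The "service" is just a thread; requests arrive over a
        // channel instead of the network, with no serialization.
        let worker = thread::spawn(move || {
            for n in rx {
                println!("worker processed {n} -> {}", n * 2);
            }
        });

        for n in 0..4 {
            tx.send(n).unwrap();
        }
        drop(tx); // closing the channel ends the worker's loop
        worker.join().unwrap();
    }

The same pattern works with bounded channels (mpsc::sync_channel) when you need backpressure.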
> We want to use the whole computer. Code runs on CPUs, and in 2023, even my phone has eight of the damn things. If I want to use more than 12% of the machine, I need several cores.
Isn't that, in this generality, an assumption that is almost always wrong?
Sure, one can do massively parallel or embarrassingly parallel computation.
Sure, graphic cards are parallel computers.
Sure, OS kernels use multiple cores.
Sure, languages and concepts like Clojure exist and work - for a specific domain, like web services (and for that, Clojure works fascinatingly well).
But there are many algorithms, even conceptually simple ones, which are not easy to parallelize. I know of no efficient parallel Fast Fourier Transform, for example.
And there are even different degrees of parallelization. Some things will scale almost linearly to CPU cores, some will share a little state and see diminishing returns, some will share a lot of state and maybe only make good use of 2 cores, and it'll all depend on the hardware too.
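As a concrete example of the near-linear end of that spectrum, a minimal Rust sketch (function and workload invented for illustration) that splits independent work across however many cores the machine reports:

    use std::thread;

    fn sum_of_squares(data: &[u64]) -> u64 {
        let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
        let chunk = (data.len() / cores).max(1);
        // No shared mutable state: each thread owns its slice, which is
        // why this end of the spectrum scales almost linearly.
        thread::scope(|s| {
            data.chunks(chunk)
                .map(|part| s.spawn(move || part.iter().map(|x| x * x).sum::<u64>()))
                .collect::<Vec<_>>() // spawn all threads before joining any
                .into_iter()
                .map(|h| h.join().unwrap())
                .sum()
        })
    }

    fn main() {
        let data: Vec<u64> = (0..1_000_000).collect();
        println!("{}", sum_of_squares(&data));
    }

As soon as the threads have to share state, the picture changes: locks and cache traffic eat into exactly this scaling.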
> When sandbox builds are enabled, Nix will setup an isolated environment for each build process. It is used to remove further hidden dependencies set by the build environment to improve reproducibility. This includes access to the network during the build outside of fetch* functions and files outside the Nix store. Depending on the operating system access to other resources are blocked as well (ex. inter process communication is isolated on Linux); see nix.conf section in the Nix manual for details.
> Sandboxing is enabled by default on Linux, and disabled by default on macOS. In pull requests for Nixpkgs people are asked to test builds with sandboxing enabled (see Tested using sandboxing in the pull request template) because in official Hydra builds sandboxing is also used.
> To configure Nix for sandboxing, set sandbox = true in /etc/nix/nix.conf; to configure NixOS for sandboxing set nix.useSandbox = true; in configuration.nix. The nix.useSandbox option is true by default since NixOS 17.09.
This appears to use namespaces etc. (basically containers) rather than a VM, but I think it may be secure. Their goal is to aid reproducibility, but if the network isolation actually works, then at least the build step will be secure.
Note that an infected source may either run malware during the build, or embed malware in the compiled binary (or both). When running the binary you're not protected at all by this sandboxing, unless you use something like Qubes (which is quite heavyweight).
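The namespace mechanism itself is easy to demonstrate. A rough Rust sketch (using the libc crate; this illustrates the mechanism, it is not Nix's actual code) of cutting a process off from the network the way the build sandbox does on Linux:

    use std::net::TcpStream;

    fn main() {
        // Move this process into a fresh network namespace. Requires
        // CAP_SYS_ADMIN, or entering an unprivileged user namespace first.
        let rc = unsafe { libc::unshare(libc::CLONE_NEWNET) };
        assert_eq!(rc, 0, "unshare(CLONE_NEWNET) failed");

        // The new namespace has no routes and no usable interfaces,
        // so any attempt to phone home fails immediately.
        let err = TcpStream::connect("1.1.1.1:80").unwrap_err();
        println!("network blocked as expected: {err}");
    }

Nix combines this with mount and IPC namespaces, as the quoted text says, so the build also sees only the store paths it declared.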
Guix builds are sandboxed per package (I'm pretty sure it cannot be turned off at all). The Guix build containers don't have network access.
Guix package definitions include a cryptographic hash of the source, don't auto-update, and get human review when there is an update.
The Guix package definition also declares which packages this package depends on. These dependencies are built first and the results made available inside the Guix container for the final package build. Nothing else is available in there.
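The pinned source hash is the load-bearing part of that. A minimal sketch of the idea in Rust (using the sha2 and hex crates; the file name and digest are placeholders, not a real package): the definition records the hash of the exact tarball the packager reviewed, and anything else is refused.

    use sha2::{Digest, Sha256};

    // Hash pinned in the (hypothetical) package definition.
    const PINNED_SHA256: &str = "placeholder_hex_digest";

    fn main() {
        let tarball = std::fs::read("hello-x.y.z.tar.gz").expect("fetch the source first");
        let actual = hex::encode(Sha256::digest(&tarball));
        // An updated or tampered-with source no longer matches the
        // reviewed hash, so the build refuses to proceed.
        assert_eq!(actual, PINNED_SHA256, "source hash mismatch: refusing to build");
        println!("source verified, ok to build");
    }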
My local food store still gives me small sticky olive labels, which can be collected in a booklet that earns a discount once it is full. I love that. And they ask me every time if I want my olives.
Now, when there are riots, it is well known that the police have human "super-recognizers" who can scan thousands of photos and identify suspect individuals at a glance.
Yet since it is a low-probability event that one of these people both has this ability and ends up working for the police, the police tend to use such a recognizer only when the event warrants the expense. When a service moves from human-run to technology-run, it generally goes from "I need to do this special thing" to "We should just leave it on and do it all the time; the IT budget takes care of that".
If they don't, that's the best startup idea I've heard on HN in a long time. I mean, it's evil and everything, but you could probably make a lot of money. Government contracts would probably pay out too.
The problem is convincing big store chains to put an expensive new barcode scanner machine in their checkout lines at every location, to do something that they have no trouble doing right now with credit card details, rewards cards, and Bluetooth tracking.