The original Wordle had a hard-coded ordering that was visible in the source. I had a play around with the list (as did many other people) a few years back; you can see my copy of the word list here: https://github.com/andrewaylett/wordle/blob/main/src/words.r...
No, it let you continue to follow the main branch for most files, while files you edited would have their changes saved to a different location. And was just about as horrible as you might imagine.
We moved from VSS to SVN, and it took a little encouraging to get the person who had built our branching workflow around that VSS feature to accept losing it, given that doing so freed us from VSS.
It's at TC39 stage three. I don't follow TC39 that closely, but it seems to be available everywhere except Safari, and I hadn't come across it before.
I'm not entirely sure whether I like it — I probably need to try using it for a bit (pun obviously intended) before making my mind up. Making it a variable declaration style rather than explicit syntax (like Java's try-with-resources) makes me slightly uneasy, as it's a change in semantics from "normal" declarations. It's not like Rust or C++ where we expect destructors to be called at the end of the scope, and it feels like it's going to be hard to work out where a variable marked "const" should have been marked "using".
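For context, the proposal builds on a disposal protocol: a resource exposes a `[Symbol.dispose]()` method, and a `using` declaration guarantees it is called when the scope exits. The sketch below simulates that by hand in plain JavaScript; `withResource`, `openFile`, and the `log` array are hypothetical names of mine, not part of the proposal, and the helper stands in for what a `using` declaration would do automatically.

```javascript
// `Symbol.dispose` exists in newer runtimes; fall back to a local
// symbol so this sketch runs anywhere.
const DISPOSE = Symbol.dispose ?? Symbol("Symbol.dispose");

const log = [];

// A hypothetical resource that implements the disposal protocol.
function openFile(name) {
  log.push(`open ${name}`);
  return {
    name,
    [DISPOSE]() {
      log.push(`close ${name}`);
    },
  };
}

// Manual equivalent of:  { using f = openFile("a.txt"); ...use f... }
// `using` desugars to roughly this try/finally shape.
function withResource(resource, body) {
  try {
    return body(resource);
  } finally {
    resource[DISPOSE]();
  }
}

withResource(openFile("a.txt"), (f) => log.push(`read ${f.name}`));
console.log(log); // ["open a.txt", "read a.txt", "close a.txt"]
```

The point of the syntax is that the try/finally plumbing disappears, which is also why it's easy to miss that a `const` should really have been a `using`.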
I definitely get the impression that Wolfram builds his tools primarily for himself, and is happy to let other people play with them because that way he gets money to pay for them.
That's not just an impression, that's exactly what he does, and it's actually their strength. Back in the day, the whole of Apple was there to make software for Jobs, and look how well that turned out. Wolfram is trying to complete the work of Leibniz and create a universal calculus: a unifying language for symbolic computation, which is amazing.
I don't know about very rich — our spare room is set up as an office for WFH, along with a sofa bed, and I put a 100" projector screen on the wall opposite the sofa. With a second-hand projector and a new (but not all that expensive) Denon surround-sound system, using speakers from an otherwise-junk 5.1 PC speaker set, the experience is better than a regular cinema. The best bit? I can turn the volume down as much as I want to.
BMW have one of the more annoying matrix main beam setups, as far as I'm concerned -- it's not great at picking out my car, and seems worse than others I've encountered. A redeeming feature is that it does seem to be smart enough to stop blinding me if I flash my own main beams.
The (2017) Ford Galaxy actually has pretty decent auto main beams. Importantly, the stalk controls keep working: if I'm just a fraction of a second late turning the beams off manually and the system beats me to it, they stay off. They also stay off when driving on roads with street lights.
That's how we use CDK. Our CDK (in general) creates CloudFormation which we then deploy. As far as the tooling which we have for IaC is concerned, it's indistinguishable from hand-written CloudFormation — but we're able to declare our intent at a higher level of abstraction.
The Scottish Government provides free-at-the-point-of-use bus travel for under-22s. I wish we'd had that 25 years ago when I first moved here. As it is, it's an ideal way for the teens to get around, and it makes the bus the sensible option when taking children into the city during the daytime.
Driving is prohibitively expensive for young people, and in the UK you can't drive cars on public roads until you're 17.
I'm very much a fan of the idea that language features — and especially library features — should not have privileged access to the compiler.
Rust is generally pretty good at this, unlike (say) Go: most functionality is implemented as part of the standard library, and if I want to write my own `Vec` then (for the most part) I can. Some standard library code relies on compiler features that haven't been marked stable, which is occasionally frustrating, but the nightly compiler will let me use them if I really want to (most of the time I don't). Whereas in Go, I can't implement an equivalent to a goroutine. And even iterating over a container was "special" until generics came along.
This article was a really interesting look at where all that breaks down. There's obviously a trade-off between keeping all the plumbing user-visible and therefore stable, versus keeping it purely magic and free to change so long as you don't break the observable behaviour. I think Rust strikes a fairly good compromise in allowing library implementations of core functionality while not needing to stabilise everything before releasing anything.
> I'm very much a fan of the idea that language features — and especially library features — should not have privileged access to the compiler.
At some point I realized I was in the opposite camp, and nothing I've seen since then has really changed my viewpoint.
Languages (compilers, libraries, toolkits, etc.) aren't supposed to be some abstract collection of parts that can theoretically be hooked together in any possible way to achieve any possible result; they're for solving problems.
You can argue that these things are not opposites, and in theory that's true, but in practice, they seem to be! Go is a good example of making compromises that limit flexibility for the sake of developer/designer convenience.
An interesting example is Lego: I'd argue that Go is closer to Lego design, because it has a bunch of specific pieces that fit together, but only in the way the designer intended.
I suspect someone coming from the opposite approach (say, Rust) would argue that some Go pieces don't actually fit together in the way we think all Lego pieces should.
My counter-argument is that not all Lego pieces do actually fit together, and you can't cut a piece in half to make a new piece that just doesn't exist. You're limited to what comes out of the factory.
On the other hand, much to my childhood self's chagrin, even when opening a fresh set and not mixing it with other Legos, there are still quite a large number of other ways to put the pieces together. After multiple decades my mother still sometimes tells an anecdote about how flabbergasted I was at a friend in kindergarten who opened a set I got him as a gift, ignored the instructions, and proceeded to build some bespoke edifice with virtually no resemblance to the picture on the box.
The pieces, which can number in the thousands for some sets, expose a sheer number of combinations: a set can be assembled as the designer intended, completely differently, or even mixed and matched with pieces from other sets. To me that feels far more like the parent comment's description of exposing the underlying features — shipping a specific vision of how they should be assembled while allowing alternative visions to be constructed. What you're describing sounds more akin to building blocks: they're larger, more uniform, and only go together in the ways the designers intended. Stacking them requires far less effort than putting together a Lego set, and you won't have trouble making a stack that's relatively stable, but you're limited to the combinations available when composing uniform cubes. You can't build an arch or a bridge because gravity will get in the way, and you have no ability to make any other shapes out of the block material.
> At some point I realized I was in the opposite camp and nothing I have seen since that has really changed my view point.
I'm in this camp as well. The additional machinery required to make library features pluggable often adds a lot of complexity to the user-visible semantics of the language. And it's often not so easy to limit that complexity to only the situations where the user is trying to extend the language.
That's not always true. Sometimes the process of making seemingly core features user-replaceable will reveal simpler, more fundamental abstractions. But this article is a good example of the opposite.
One might push that analogy past its limit, and suggest that you probably wouldn't try to build a road-legal car out of Lego :).
More seriously, I'm not expecting that most people will want to use the underlying language features, nor indeed that those who do should use them often. They're there to provide a clean separation between control logic and business logic, and that helps us create cleaner abstractions so we can test our control logic independently of our business logic.
Most of the time I should be using the control logic we've already written, not writing more of it.
What you're missing is that the whole point of writing better languages is to write better libraries.
It isn't worth writing better languages just to make the "last mile" of the final program more productive --- making new languages is extraordinarily expensive, and that sort of single-shot productivity boost just isn't worth it.
But if you can make better libraries --- code reuse and abstraction in hitherto unimaginable ways --- and then use those libraries to write yet more libraries, you get a tower of abstractions, each stage yielding ever more productivity, like an N-stage rocket.
This sort of ever-deepening library ecosystem is scarcely imaginable to most programmers. Or they think they can imagine it, but everything looks like a left-pad waste of time to them.
Having a separation between the "pure language" and the library is a requirement if you want a language that can be used for low-level components, like kernels or bare-metal software.
I don't think this is possible in a language that needs a runtime, like Go.