I've never understood the appeal of these "define struct-like-object" libraries (in any language; I've never understood using the standard library's "Struct" in Ruby). My preferred solution for addressing complexity is also to decompose the codebase into small, understandable, single-purpose objects, but so few of them end up being simple value objects like Point3D. Total ordering and value equality make sense for value objects but not much else, so they don't really improve understandability or maintenance that much. And concerns like validation I would never want to put in a library for object construction. In web forms, where there's a limited subset of rules I always want to treat the same way, sure; but objects have much more complicated relationships with their dependencies, so I don't see much value in validating them with a library.
Overall, I really don't see the appeal. It makes the already simple cases simpler (was that Point3D implementation really that bad?) and does nothing for the more complicated cases which make up the majority of object relationships.
Ignore all of the validation aspects. In Python, you have tuples, (x, y, z); then you have namedtuples; and then attrs/dataclasses/pydantic-style shorthand classes.
These are useful even if only due to the "I can take the three related pieces of information I have and stick them next to each other". That is, if I have some object I'm modelling and it has more than a single attribute (a user with a name and age, or an event with a timestamp and message and optional error code), I have a nice way to model them.
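For example, here's the same record at each rung of that ladder (a sketch; the names are just illustrative):

    from collections import namedtuple
    from dataclasses import dataclass
    from typing import Optional

    # Bare tuple: positional, anonymous fields.
    event = (1650000000.0, "disk full", 28)

    # namedtuple: the fields get names, but it's still a tuple.
    Event = namedtuple("Event", ["timestamp", "message", "error_code"])
    e = Event(1650000000.0, "disk full", 28)

    # Dataclass: typed fields, defaults, and room to grow into a real class.
    @dataclass
    class EventRecord:
        timestamp: float
        message: str
        error_code: Optional[int] = None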
Then, the important thing is that these are still classes, so you can start with
    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        age: int
and have that evolve over time to
    @dataclass
    class User:
        name: str
        age: int
        ...
        permissions: PermissionSet

        @property
        def location(self):
            # send off an RPC, or query the database for some complex thing.
            ...
and since it's still just a class, it'll still work. It absolutely makes modelling the more complex cases easier too.
Note that that "location" property should be a method instead of a property, to signal that it does something potentially complex and slow. Making it a property practically guarantees that someone will use it in a loop without much second thought, and that's how you get N+1 queries.
Fair point! One of the various @cached_property decorators might fix this, depending on the precise use case, but yeah, this is an important consideration when defining your API.
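For instance, a minimal sketch using the stdlib decorator (fetch_location_via_rpc is a hypothetical stand-in for the slow call):

    from dataclasses import dataclass
    from functools import cached_property

    def fetch_location_via_rpc(name):
        # Hypothetical stand-in for an RPC or a complex database query.
        return f"location-of-{name}"

    @dataclass
    class User:
        name: str

        @cached_property
        def location(self):
            # Computed on first access, then cached on the instance,
            # so a loop pays the expensive lookup once per object.
            return fetch_location_via_rpc(self.name)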
Well, one appeal is that you don't have to write constructors; that's already enough of a win for me. Then you get a sane eq and a sane str, and already you've removed 90% of the boilerplate.
I really, genuinely don't get the appeal. I don't follow the "less code = better" ideology so maybe that's a contributor but I really don't see how this:
    class Person:
        def __init__(self, name, age):
            self.name = name
            self.age = age
is any worse than this:
    @dataclass
    class Person:
        name: str
        age: int
I'm not writing an eq method or a repr method in most cases, so it just doesn't add much for the cost.
The point is that for data-bag style classes, you end up writing a lot more boilerplate than that if you use them across a project. Validators (type or content), nullable vs not, read-only, etc.
The minimal trivial case doesn't look much different, but if you stacked up 10 data classes with read-only fields against bare class implementations with private members plus properties to implement read-only, you would start to see a bigger lift from attrs, as there would be a bunch of boring duplicated logic.
(Or not: if your use cases are all trivial, then of course don't reach for the library. But hopefully you can see why this gets complex in some codebases, and why some would reach for a framework.)
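To make that lift concrete, here's a sketch of the read-only comparison for one small class:

    from dataclasses import dataclass

    # With a dataclass, read-only fields are one flag:
    @dataclass(frozen=True)
    class Config:
        host: str
        port: int

    # The bare-class version repeats this pattern for every field:
    class ConfigByHand:
        def __init__(self, host, port):
            self._host = host
            self._port = port

        @property
        def host(self):
            return self._host

        @property
        def port(self):
            return self._port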
The advantage of dataclasses is that they're hard to mess up. They define all the methods you need for an ergonomic, idiomatic class that is essentially a tuple with some methods attached, and they have enough knobs to encompass basically all "normal" uses of classes.
It’s a pretty good abstraction that doesn’t feel half as magic as it is.
Given that code is for people, I've never found a certain amount of idiomatic boilerplate a problem. The desire to remove it all, or magicify it away (e.g. Django), has always made me do a bit of an internal eye roll.
To start with, the non-`@dataclass` version here doesn't tell you what types `name` and `age` are (interesting that it's an int, I would have guessed float!). So right off the bat, not only have you had to type every name 3 times, you've also provided me with less information.
> I'm not writing an eq method or a repr method in most cases, so it just doesn't add much for the cost.
That's part of the appeal. With vanilla classes, `__repr__`, `__eq__`, `__hash__` et al. are each an independent, complex choice that you have to intentionally make every time. It's a lot of cognitive overhead. If you ignore it, the class might be fit for purpose for your immediate needs, but later, when debugging, inspecting logs, etc., you will frequently have to incrementally add these features to your data structures, often in a haphazard way. Quick, what are the invariants you have to verify to ensure that your `__eq__`, `__ne__`, `__gt__`, `__le__`, `__lt__`, `__ge__` and `__hash__` methods are compatible with each other? How do you verify that an object is correctly usable as a hash key? The testing burden for all of this stuff is massive if you want to do it correctly, so most libraries that try to eventually add all these methods after the fact for easier debugging and REPL usage usually end up screwing it up in a few places and having a nasty backwards compatibility mess to clean up.
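(For the record, the core invariant is that objects which compare equal must hash equal, and the hash must stay stable while the object sits in a dict or set. A frozen attrs class, as a sketch, gets this by construction:)

    import attrs

    @attrs.frozen
    class Point:
        x: int
        y: int

    # attrs generates __eq__ and __hash__ together, so the contract
    # holds by construction, and instances are safe as dict keys.
    assert Point(1, 2) == Point(1, 2)
    assert hash(Point(1, 2)) == hash(Point(1, 2))
    lookup = {Point(1, 2): "found"}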
With `attrs`, not only do you get this stuff "for free" in a convenient way, you also get it implemented in a way which is very consistent, which is correct by default, and which also provides an API that allows you to do things like enumerate fields on your value types, serialize them in ways that are much more reliable and predictable than e.g. Pickle, emit schemas for interoperation with other programming languages, automatically provide documentation, provide type hints for IDEs, etc.
Fundamentally attrs is far less code for far more correct and useful behavior.
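As a small illustration of the field-enumeration and serialization point (modern attrs API):

    import attrs

    @attrs.define
    class User:
        name: str
        age: int

    u = User("Ada", 36)
    attrs.asdict(u)                        # {'name': 'Ada', 'age': 36}
    [f.name for f in attrs.fields(User)]   # ['name', 'age']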
I understand repr for debugging (though imo it's a deficiency of the language that custom objects don't have a repr which lists their attributes), but eq is a property of the domain itself; two objects are only equal if it makes sense in the domain logic for them to be equal, and in many cases that equality is more or less complicated than attribute equality.
> though imo it's a deficiency of the language that custom objects don't have a repr which lists their attributes
It makes perfect sense that attributes be implementation details by default, and `@dataclass` is one of the ways to say they're not.
> eq is a property of the domain itself; two objects are only equal if it makes sense in the domain logic for them to be equal, and in many cases that equality is more or less complicated than attribute equality.
dataclass is intended for data holders, for which structural equality is an excellent default.
If you need a more bespoke business object, then you probably should not use a dataclass.
I was merely noting that dataclasses are mostly intended for data holder objects (hence data classes), and thus defaulting to structural equality makes perfect sense, even ignoring it being overridable or disableable.
This was in reply to this objection:
> eq is a property of the domain itself; two objects are only equal if it makes sense in the domain logic for them to be equal, and in many cases that equality is more or less complicated than attribute equality.
That warning has caught bugs for me when I was copying and pasting code, or writing code on "autopilot". Usually they're pretty obvious mistakes, though.
Unused variable warnings are a good idea, but making them hard errors is a language design mistake. Warnings are good because they allow programmers to quickly make changes and test things, while providing a reminder to clean things up in the end (which is why the "just use tooling that removes unused variables" response misses the point--when making quick temporary changes for debugging, you want the warnings as reminders to go back and undo the change). Additionally, warnings allow for adding helpful static analyses to the compiler over time without breaking existing code like introducing new errors does. As I recall, there were some cases in which Rust 1.0 accidentally didn't flag unused variables, which was fixable post 1.0 without breaking existing code precisely because it was a warning, not an error.
Cargo doesn't enable warnings for crate dependencies, by design. In fact, it won't even emit them if those crates say #[deny(warnings)]--there's a special rustc flag called --cap-lints that it uses for this (RFC at [1]). The reason is that a lot of crates say #[deny(warnings)], and this was creating no end of backwards compatibility problems when new warnings were added.
There is an interesting thread with community consensus against the use of #[deny(warnings)] at [2]. The most important takeaway for me is that the right place to deny warnings is in CI. You don't want end users who compile your crate to have their builds fail due to warnings, because they might be using a newer version of the compiler than you were. You don't want to fail developers' builds due to warnings while hacking on code, because of the overhead warnings-as-errors adds to the edit/compile/debug cycle. CI is the right place to deny warnings, because it prevents warnings from getting into the repository while avoiding the other downsides of warnings-as-errors.
Yes, users can misuse warnings by ignoring them in CI, but the ergonomic cost of forcing unused variables to be errors is disproportionate in regard to this. It affects all users regardless of whether they would deny warnings in CI, and leads to either removing/adding some code repeatedly so it compiles at all, or worse, adding fake uses, defeating the purpose of the warning. I am using C++ with -Werror during development because it is the only way to keep my sanity (functions not returning a value in C++ is a warning), but it is an ergonomic disaster that I am happy to avoid when using Rust (where the right place to deny warnings is in CI).
I would agree that forcing unused variables to be errors is a design mistake.
I think it's reasonable for users to want a workflow that includes some mode where they can temporarily compile code that has unused variables, then check in code that does not. The trouble is that if there's such a mode, people will just leave it on permanently. I don't have solutions, but I think it's worth trying to save people's workflows.
It happens. Hell, it even happens in Rust; I had one a few weeks ago that the warning lint caught. I had a rename, and then added an inner scoped var with the old, previous name, but neglected to use that new var in that inner scope. It compiled fine, but was very much a bug.
Luckily, though, my Rust setup doesn't fail to compile with unused stuff, it just warns; and then on CI I have it reject all warnings.
I agree it's very frustrating not having a -dev flag or something less restrictive.
It is only useful in the very niche case that the incorrect variable name you're using is defined (otherwise you'd get an unknown identifier error) and the correct name you should have been using isn't used anywhere.
Would catch this:

    var a, b
    c = a + a // whoops, should have been a + b, compiler complains about unused b
Doesn't catch this:

    var a, b
    foo(a, b)
    c = a + a // whoops, should have been a + b, but still compiles
All in all, unused variables being errors is an awful feature that isn't very helpful in practice, at the cost of making experimentation a pain in the arse.
There is nothing that kills my state of flow more than having to comment out a piece of code that is unreferenced, because the compiler complains, while I'm trying to hack and explore some idea.
I don't program in Ruby, but every time I look at it I see something different, depending on who wrote the code and what their personal style is.
I'd say this is a plus, at least in certain dimensions. One of the best descriptions I've heard of Rails' ActiveSupport (a collection of extensions to built-in types included in Rails) is as a dialect of Ruby, one specific for developing web applications. Other domains could and should use different dialects.
Things like this are why I love Smalltalk (and its followers, e.g. Ruby). So much expressiveness, reading like a sentence. And Smalltalk even had the ingenuity to use the period as a statement terminator, so it feels even more like a descriptive sentence describing the domain concept.
Yeah, I have no idea why Rebol and Red aren't more popular. ( https://www.red-lang.org/ ) They pack such a bang for the buck it's almost embarrassing for other language/runtimes.
Rebol itself is practically dead, no new release in a decade. It was also proprietary early on and a "weird" language (interesting, and good, but non-standard in so many ways that it was very niche). That killed its potential in the late 90s and early 00s when everyone was moving to open source scripting languages that filled the same niche, but with more conventional syntaxes and free access.
That's why it isn't more popular. In some ways it was a few years too early (to hit the zeitgeist around DSLs), but it was also a few years too late for its proprietary model.
And something else that needs to be said specifically about the Red programming language. I like the full-stack concept, ease of use, cross-compiling, and the goals they are shooting for. But the lead developers need to get it moving. It's like a small fire needs to be lit under their butts, as the competition isn't standing still.
The last stable release has been stuck at 0.6.4 since 2018. I know the lead developers have recent automated builds (December 2021), but come on, such is not going to inspire faith in casuals to join the bandwagon nor keep getting mentioned by the media. Many users like to see newer stable releases, until the project achieves its stated goals. A lot is to be said for maintaining momentum and enthusiasm.
That's another aspect of the hurdles for the newer programming languages, it's harder than ever to keep people's attention and whip up excitement.
The problem is that there are so many new languages that it's hard for any of them to gain enough momentum to encourage enough programmers or businesses to learn or even look at them. Not to mention that various top 10 languages are supported by huge companies that may rather snuff out or at least throw shade on up and coming languages that threaten their interests.
The development pace of many of the new languages is relatively slow, despite their having some great ideas, syntax, or features. That means years before they have a large enough ecosystem and libraries to even come near to mounting a challenge to the more established languages.
For instance, another new one out there that I like is VLang (https://github.com/vlang/v). Like Red, it needs to find a sweet spot that will propel it to greater usage and recognition. Partly that can be cross-compiling and cross-platform application development. Also, a strong emphasis on mobile development for both Android and iOS would help, but Apple makes their part of it difficult. To stand out, it means having easy-to-create UIs, their own IDEs (to maximize language features), updated documentation (to help beginners), books about it, etc... Just being on Visual Studio Code, with a hundred other languages, makes it hard to get noticed.
Compare Red with established heavyweights like C#, Python, JavaScript, or even contenders like Delphi/Object Pascal. Not so easy to pull attention away from those languages and the thousands of projects using them, unless something very compelling can be shown or proven.
> Yeah, I have no idea why Rebol and Red aren't more popular
Rebol died (for all intents and purposes) because it stayed proprietary without a large enough proprietary market too long, and good enough open source languages took the niches it could have had and built out robust ecosystems that it never developed.
Red has been disadvantaged by the ecosystem consideration, and hasn't found a killer focus that gets people over that in enough mass. They tried chasing crypto for that...
Similarly, TeX bakes units into its syntax, but METAFONT (which is much more pleasant as a programming language in general) just has a production for <numeric token> <numeric variable> instead (yes, it has a very involved context-dependent grammar), though it gives up type checking because of that.
Rebol is mostly "dead", and I don't like to throw that term around lightly. Originally it started out as a proprietary commercial language; then, when it couldn't make enough money, it went open source. It has some good concepts, and passionate followers, but it didn't go mainstream. Even though it went open source, in the flood of so many languages that exist now, continual development stalled. However, it did produce several offshoot descendant languages.
Something to point out is that there are a lot of old tutorials and information on Rebol that become more useful as references if a person learns any of the descendant languages.
Red (https://www.red-lang.org) is a very good open source offshoot. If a person is choosing between them, I think most would recommend it. But the problem with Red is that its development can be described as sluggish. The lead developers appear overly preoccupied with other projects, as opposed to getting Red into a more useful state and delivering on the stated goals and expected features.
For example, one of its main benefits is easy cross-compiling to other OSes. However, it's stuck in 32-bit, and needs to have a 64-bit version to keep pace with the requirements of macOS, iOS, and Android.
If Red's development were to keep pace, which is important for multi-OS usability, I do think it would be worth knowing. It's quite a powerful and easy-to-learn language, for potentially a wide variety of purposes, with a light footprint.
In reality? None. Both give me lots of trouble. Red doesn't offer a good way to launch external programs, so when things fail, I always drop down to sh. Rebol, interestingly enough, offers a wee bit more stability, but its age is clearly showing. R3, being open source, has a bunch of versions and forks, none of them usable. There's also Ren/C, which is supposed to be usable, but I cannot build it on a Pi 4. There is also another attempt called Arturo, written in Nim; this one works, but development is done by only a single person.
I don't see any sentence nor expressiveness in that. It's the same structure as most method calls.
"Deposit 100 dollars in my account." is a sentence, and that's doable in most languages. It just depends on how high you want to go with it. Functions are ubiquitous and powerful:
    deposit($(100), in(my_account))
And if we remove brackets like a lot of languages (I prefer not to)...
    deposit $ 100, in my_account
Functions remove the need to tie actions to objects too. We can rely on types/interfaces.
    deposit($(100), in(the_river))
We could go further, again, depends where you want to stop. Expressiveness isn't really limited by most languages.
Also, reading like a sentence is not expressiveness. Expressiveness in programming languages is about abstractions, like for example macro capabilities. Reading like a sentence is merely about fluent APIs and (sometimes) syntactic sugar.
...which illustrates one of the oddities of Smalltalk's vision of OOP pretty well. I've made deposits into accounts, or maybe I've asked banks to make a deposit into an account on my behalf, but I've never (intuitively) instructed an account to make a deposit into itself.
Do bank tellers, conceptually, think of themselves as telling accounts to add $100 to themselves? Or do they think of themselves as adding $100 to an account?
This can cut both ways, though—English-likeness can be bad for a language when it comes to the boundaries of what English expresses well (see https://daringfireball.net/2005/09/englishlikeness_monster). There's a reason we don't program in COBOL!
I want so much to like it, but, even in its heyday, the documentation was so poor—everyone seemed so convinced that its English-likeness made it obvious that they seemed to refuse to document the 'obvious' commands. And now it seems that even Apple has decided to let it languish.
But therein lies the problem. Let's forget about programming for a moment and talk human languages instead.
In English, you say "Cheese omelet". In Dutch, you say "Kaas omelet". In French, you say "Omelette du fromage".
Neither ordering is better than the other; it's just how it is. A native English speaker who is learning both Dutch and French may posit the feeling that Dutch feels 'more natural' and has 'more obvious word order'. But I'm sure you'd agree that this is just happenstance: English and Dutch simply happen to match. There's nothing intrinsically better about putting the cheese in front of the omelet instead of after it. Nevertheless, for a native English speaker, Dutch will seem simpler... __right up until the moment you turn "native", and you no longer translate each word on its own back to English first__. Once you've hit that level, there's no difference. At all. You hear the words and your brain visualizes a cheese omelet instantaneously.
The same logic applies to programming languages. "It reads like a sentence"? What are you on about? If I see this code:
    account.deposit(Dollar.of(100));
I know what that means, __instantly__, in the exact same fashion that someone who is entirely fluent in both French and English sees absolutely no difference whatsoever between Omelette du Fromage and Cheese Omelet. Simply because I program enough JavaScript, Java, and C# (C-esque syntax) that __THIS__ is natural to me.
Rails in particular is egregiously in violation of this ridiculous aim to make it look like English (but, fortunately, nothing so crazy as AppleScript). Rails prides itself on being able to write `5.minutes` - they monkey patch numbers to add a minutes function.
But in programmese, of any flavour, that makes very little sense. You want to create a value of type Minutes, or in some languages you want to create a value by using a function from the namespace Minutes, and this is an operation that requires a parameter (the amount of minutes). Putting the param _before_ the namespace / class / type / function name / whatever your programming language uses here is _highly exotic_: something very, very, very few programming languages do. Except Rails (I'm going by memory here; I believe the minutes function on numeric types is a monkeypatch Rails adds, it's not stock Ruby). They do it, apparently (in that they call it out in their tutorials), because "5 minutes" reads like English, and leave it at that, clearly insinuating that 'reads like English' is an upside.
No it isn't.
And that's why "account deposit: 100 dollars." is by no means "easier to read" simply because it reads like an English sentence.
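For contrast, Python's stdlib keeps the conventional order, with the unit as a parameter to the constructor:

    from datetime import timedelta

    # Conventional "programmese" order: the constructor/type comes first,
    # and the amount is a parameter. Rails' 5.minutes flips this purely
    # so that it reads like English.
    five_minutes = timedelta(minutes=5)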
One hundred percent agree. Please stop trying to write really verbose code that looks like some weird pseudo-English. Mathematicians realized that having a grammar specific to mathematics was a GOOD thing, NOT a drawback. Even if you have to learn a few symbols at first.
What I want to see:
    5 + 3
What I don't want to see:
    Add the number five to the number three.
One of Tcl's most popular object systems is called incr Tcl. Tcl allows some Lisp-like metaprogramming (albeit effectively all stringly-typed, and with fexpr crawling horrors -- where it's really easy to confuse metalevels -- and weird scoping rules), so incr Tcl can be, and is, implemented in Tcl.
> Rails prides itself on being able to write `5.minutes` - they monkey patch numbers to add a minutes function.
That can be done in strongly, statically typed languages with either extension functions (Kotlin, Dart) or type classes (Rust, Haskell).
People do it because there are advantages beyond the subjective reason that it reads better (I can speak several human languages and in all of them you would say number-timeUnit - so it's not just English even if it's not completely universal). It's also easier to "look up" functions based on the type of the value, for example.
Don't forget F#'s units of measure! Which by the way are pretty cool, as the compiler automatically infers new units of measure for you:
    > [<Measure>] type km;
    > [<Measure>] type hour;
    > let distance = 60.0<km>;
    > let time = 0.5<hour>;
    > let avgSpeed = distance / time ;;
    val avgSpeed : float<km/hour> = 100.0
(And of course forbids physically illegal stuff such as adding values with different dimensions)
>Rails prides itself on being able to write `5.minutes` - they monkey patch numbers to add a minutes function.
In languages like Nim with Uniform Function Call Syntax this is actually completely natural and universal: f(a, b) and a.f(b) are completely equivalent in all cases. So if you have an ordinary `minutes` function that takes a number and returns a time/duration, you can write either 5.minutes or minutes(5) as the fancy takes you. No special monkeypatching required, it just works everywhere for every function and every type.
Exactly. This is why I dislike SQL. The relational model is great, but it was hamstrung at birth by the foolish insistence that it "read like English" so that non-programmers could write queries. Now we're stuck with a deranged syntactic mishmash.
Interestingly, a large number of the people actively developing Pharo seem to be native speakers of French and Spanish. People from Inria are the lead devs, and it’s got a lot of momentum with South American universities too. I guess they see value in it as a pedagogical tool, at least.
I definitely understand the argument that different human languages have different semantic patterns. The only non-English language I know (reading, not speaking) is classical Latin, which likes to have nouns and verbs at the opposite ends of phrases; after reading a lot, it just starts to "feel" right.
    account.deposit(Dollar.of(100))
I also know what this means, but somehow it just doesn't "feel" right. Maybe it's because all the other words are in English, so I want 100.dollars to follow English word order? But account.deposit(...) feels much better to me than deposit(into: account, ...), which is the "wrong" word order. I'm perfectly willing to accept that I just have a sense of what "feels" good to me and it's not necessarily logical or reasoned, just a general aesthetic of nouns playing in a world of data.
As an aside, I always find it funny when people use "monkey patching" in an attempt to decry giving language users expressive power. I love monkeys! I love monkey patching!
I'm still not clear on whether or not you are a native English speaker, but the point being made is that the expressiveness of a programming language shouldn't be tied to the semantics of a spoken language irrespective of whether the majority of the keywords are expressed in some given language (in this case English using Latin alphabet).
The derogatory and inaccurate use of “monkey patch” to describe adding methods to an open class identifies you as a Python advocate, so grains of salt applied, but:
I can’t speak for this particular Rails usage, but one of the most powerful things you can do with Ruby is build domain-specific languages without modifying the language itself. *This is a core feature of Ruby.*
The criticism you're making is one of the Rails DSL: there's nothing in Ruby preventing you from instead writing account.deposit(Dollar.of(100)).
Put another way: This is like people criticizing RISC-V for not having multiplication in the core ISA — they don’t seem to grasp that RISC-V is an ISA construction set. Ruby is a language construction set.
I first heard the term in reference to how mootools messed with prototypes of JavaScript built-ins; at that time, I had no experience with either python or ruby.
Whatever context the term may have originated with, it far outgrew those origins. I definitely don't instinctively think of python programmers snubbing ruby when I hear the term.
I don't enjoy writing Rust but its perspective on this is at least internally consistent. What are the alternatives for managing cyclic, linked data structures in other languages?
In languages with manual memory management you can do it, but it's incredibly easy to mess up. You either have to maintain the entire mental model of who owns what data and what data has been initialized in your head or write it down and risk that becoming out of date.
In languages with a GC, the implementors have assumed this complexity for you. Depending on what style of GC your language uses, the exact strategy will be different, but a GC is a comparatively complex piece of code, especially one which handles reference cycles well, like we're talking about here.
Either way, the complexity is there somewhere. If you manage memory yourself, you deal with the complexity yourself in a hard-to-debug way. If you can accept the tradeoffs of a GC, then the complexity is abstracted behind the GC. Because Rust is designed for use-cases where you can't accept the tradeoffs of a GC, it has to surface that inherent complexity somewhere. It decides to surface it with ownership semantics, which at least make the rules you're following in manual languages explicit and (largely) unbreakable.
Yeah it's complex, but the underlying problem is complex. I can use GC languages, so I do, but you'd be no better off in C.
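As a concrete example of the GC side absorbing that complexity, CPython's cycle collector reclaims mutually referencing objects that plain reference counting never could:

    import gc

    class Node:
        def __init__(self):
            self.other = None

    a, b = Node(), Node()
    a.other, b.other = b, a   # a reference cycle
    del a, b                  # refcounts never reach zero on their own
    print(gc.collect() > 0)   # True: the cycle detector finds and frees them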
The challenge is that Rust is not explicit about what it can't do. There are GCs for Rust that don't really work, and the borrow checker disallows reference cycles.
The Rust docs try to pretend this is never a problem and that people writing such structures are doing it wrong, which is imho a problem.
You might be confusing the means with the end goal. Nobody needs to write a linked list, except when interviewing.
What you need (a feature, a performance improvement, etc.) is usually better served with other structures, especially in languages which don't heavily favor heap allocations (in a GC based language you've already paid the allocation cost when creating the object anyway).
I think there's now some evidence that Rust prevents developers neither from making efficient applications nor from efficiently making them.
This is certainly valid for some number of uses; however, there isn't a good way of mapping all reasonable uses of linked structures to unlinked structures except via an associative array.
There are many linked structures, such as a linked hashmap or an LRU cache, that can't be efficiently represented in safe Rust. There are engineers who need to write and use such structures outside of an interview context; these engineers must use unsafe to write valid programs.
I honestly wouldn't mind if the docs were simply upfront about the need to use unsafe and the insufficiency of any alternative for these situations. Long ago it was thought that users would simply use something along the lines of GC<Type> to handle these situations, but none of the Rust GC projects materialized.
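For reference, this is the kind of structure in question, sketched in Python, where the runtime owns the links (OrderedDict is itself a linked hashmap underneath):

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()   # recency-ordered linked map

        def get(self, key):
            if key not in self.items:
                return None
            self.items.move_to_end(key)  # mark as most recently used
            return self.items[key]

        def put(self, key, value):
            if key in self.items:
                self.items.move_to_end(key)
            self.items[key] = value
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)  # evict the least recently used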
I want to write a function to increment the value of every Node. This is trivial with recursion, but that risks blowing the stack. So I try an iterative version using an explicit stack, but the borrow checker doesn't understand explicit stacks, only the C stack, so it rejects my code.
There's no inherent underlying complexity to transforming a recursive function to an iterative one, yet it is legitimately hard in Rust.
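The transformation really is mechanical, which is what makes the friction surprising. In Python, for instance:

    class Node:
        def __init__(self, value, children=()):
            self.value = value
            self.children = list(children)

    def increment_all_recursive(node):
        node.value += 1
        for child in node.children:
            increment_all_recursive(child)   # deep trees risk blowing the stack

    def increment_all_iterative(root):
        stack = [root]                       # explicit stack replaces the C stack
        while stack:
            node = stack.pop()
            node.value += 1
            stack.extend(node.children)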
While I think the new site certainly looks appealing and includes a lot more helpful information than before, there's something about the old playful, hand-drawn design that felt more like it embodied the ethos of Rails.
The last point would be amazing, imo. It's unfortunate that, because of history, commands accept and parse arbitrary strings as input instead of formally specifying it like a function signature + docblock. If I could rewrite the universe, commands would use a central API for specifying the names, data types, descriptions, etc. for all the input they take. In our timeline, maybe some file format could be standardized that describes the particular inputs and options a command takes, and e.g. shells would hook into it. Kind of like header files, but distributed either in a community repository or by each of the tools themselves.
> ... commands accept and parse arbitrary strings as input instead of formally specifying it like a function signature + docblock. If I could rewrite the universe, commands would use a central API for specifying the names, data types, descriptions, etc. for all the input they take
Which is exactly what PowerShell does: commands in PowerShell (called "cmdlets") are like functions with type hints. A command does not parse the input strings itself; rather, it exposes this type information to the shell. The shell is the one responsible for discovering the parameters (and types), parsing the command line, and coercing types before passing them (strongly typed) to the actual command.
This means that the information about parameter names and types is readily available for doc generation, tab-completion, and language servers, which allow syntax highlighting, completion, etc. to work even inside editors such as VS Code or Emacs.
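That declare-rather-than-parse pattern isn't unique to PowerShell; within a single program, Python's argparse works the same way: parameters and types are declared up front, and the framework parses and coerces the raw strings.

    import argparse

    parser = argparse.ArgumentParser(description="Resize an image")
    parser.add_argument("path")                               # positional
    parser.add_argument("--width", type=int, required=True)   # typed option
    parser.add_argument("--height", type=int, required=True)

    # The framework validates, coerces "640" -> 640, and generates --help;
    # the program never touches the raw argument strings itself.
    args = parser.parse_args(["photo.png", "--width", "640", "--height", "480"])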
The point is that to specify a cmdlet you must declare parameters, in much the same way that for a function in a programming language to accept parameters, it must declare them as formal parameters.
And with PowerShell Crescendo, the same experience can be provided for native commands, although I think it is a bit much to expect everyone to create Crescendo configuration files.
Yeah, the inertia is the real killer here. I'd love to see a fully embedded solution (particularly given that ecosystems like Go and Rust discourage sidecar files like manpages), but thus far the only one I've really seen is individual tools supporting a `--completions=SHELL` style argument that spits out an `eval`-able script for the requested shell.
The real dream would be some kind of standard way for a program to indicate that it's opting into more intelligent CLI handling without any execution. I've thought about trying to hack something like that together with a custom ELF section, but that wouldn't fly for programs written in scripting languages.
> In our timeline, maybe some file format could be standardized that describes the particular inputs and options a command takes, and e.g. shells would hook into it. Kind of like header files, but distributed either in a community repository or by each of the tools themselves.
This sort of sounds like what we're working on at Fig. We've defined a declarative standard for specifying the inputs to a CLI tool and have a community repo with all supported tools: https://github.com/withfig/autocomplete
I've been thinking of throwing FreeBSD on an old Thinkpad and trying to use it normally. Right now I run Debian, with most of my work happening in Firefox or the shell.
Is there anything you didn't even think about that ended up being a problem, or noticeably worse? Or the opposite, something you thought would be an issue but wasn't?
Most gmake builds work well, but sometimes you have to change the include and lib paths. FreeBSD is just a bit different in that respect, but it's a good thing IMO. It's very consistent.
WiFi is currently limited to 802.11n on all chipsets. Work to support 802.11ac (and more wireless cards) is ongoing and looks to me like it could be ready in 2022. Also, options for videoconferencing programs (Zoom, Skype) are poor; you're effectively limited to browser-based versions, which don't perform well with an iGPU, at least.
Honestly - I expected it to be harder than it was. I read the entire handbook cover to cover before doing an install. I think that was the trick. It’s just been smooth sailing.
I keep an R620 running Linux that I can ssh into if I need Docker - but I use it a lot less than I expected.