This past month, I have spent a decent number of hours (7+) trying to set up Nix on my Mac with nix-darwin, and failed.
Most tutorials out there encourage you to download someone else's configuration to get going. I don't want to do that. I want to understand at its core how this thing works.
I've read the official nix language documentation, watched YouTube tutorials, read 3rd party tutorials, and still couldn't get going with a simple configuration that would install a few packages.
The nix language is also really unpalatable to me. But I could deal with that if the examples out there showed a consistent way of doing things – that's not the case. It seems the same thing can be done many different ways – but I want to know and do it the right way. I would normally turn to the official best practices documentation, except Nix's is very short and doesn't help much.
I really want to use nix. There's no question about its advantages. But nix just won't let me (or maybe I'm too old to learn new things).
That being said, I'll probably give it another try this month...
> The nix language is also really unpalatable to me.
yeah, I wish I could give you some "it gets better" good news, but...
I've used NixOS as my daily driver for ~10 years, including the laptop I'm typing this on.
I love NixOS-the-OS, I love nixpkgs-the-ecosystem. but I still hate Nix-the-language.
it's like Perl and Haskell had a drunken hookup that produced a child. and then abandoned that child in the forest where it was raised by wolves and didn't have contact with another human until it was fully grown.
(to answer the inevitable replies, yes I understand functional programming in general, and yes I am aware that Guix exists)
for simple NixOS administration, you can get pretty far with treating configuration.nix as "just" a config file, rather than a program written in a Turing-complete functional language.
writing your own modules or flakes, or re-using flakes published by other people, is strictly optional. make friends with The Big Options Page [0] - anything you find there can be dropped into your configuration.nix without really needing to understand Nix-the-language.
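For instance, a minimal configuration.nix built purely from options found on that page might look like the sketch below (the specific packages and options are just illustrative):

```nix
{ config, pkgs, ... }:
{
  # each line is an option copied from the options search;
  # no Nix-the-language knowledge needed beyond "name = value;"
  environment.systemPackages = [ pkgs.git pkgs.ripgrep ];
  services.openssh.enable = true;
  time.timeZone = "Europe/Berlin";
}
```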
> The nix language is also really unpalatable to me.
It may not really help, but I firmly believe it is not the language but the ecosystem, and that it's a more fundamental issue. But maybe putting the blame elsewhere could help accept the situation.
So anyways, the language is pretty much lazily evaluated JSON. But even if it were something else (insert your favourite language), the problem ultimately is that packaging software is complex, especially in a non-standard way: there are endless edge cases, it requires whole libraries and conventions, and this is simply not a well-trodden path. Most programs simply hard-code "traditional" Linux file system conventions, and those have to be patched in some way.
So the hard thing is not "is this really a function application here". When writing new Nix code, the hard thing is simply knowing that for Python there already exists an abstraction in nixpkgs, but that to use it you need this folder structure and this build tool, etc. Especially when there are multiple abstractions for the same thing, because it's an absolutely huge repository with countless packages.
But the benefits absolutely make up for it big time - there is simply no going back from Nix imo. I would honestly feel constantly "dirty" with any other traditional package manager, it's like file "versioning" before version control.
(PS: just grep for use cases of a function you are looking for. Also, find a "blueprint" package and start from there, e.g. another program written in python with a few native deps)
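As a rough idea of what such a "blueprint" boils down to, here is a sketch of a Python package expression (the name and dependencies are made up, and newer nixpkgs conventions may prefer `pyproject = true` with `dependencies`):

```nix
{ lib, python3Packages }:

python3Packages.buildPythonApplication {
  pname = "mytool";       # hypothetical package
  version = "0.1.0";
  src = ./.;              # expects a standard setuptools/pyproject layout
  propagatedBuildInputs = with python3Packages; [ requests ];
}
```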
I've used Nix for at least seven years, and I firmly believe that the language is a large part of the problem. Yes, the Nix language is "just another lazily-evaluated pure FP language in the ML tradition" and "it's like lazily-evaluated JSON", but it has several large footguns. The biggest one is that spaces are used to separate elements in list literals as well as for function application. The second is the lack of a usable type system, in the sense that the programmer cannot assert the types of values in a useful way. Instead, you have to rely on comments and convention to know what a function's arguments are.
These two design warts also interact with each other really badly: if you try to put a function application into a list and forget to enclose it in parentheses, you instead insert the function as one element in the list and its arguments as successive elements. The usual result is an "expected an X but got a function" error in some completely unrelated part of the code.
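A small sketch of that footgun:

```nix
let double = x: x * 2; in
{
  # two elements: the function itself, then the number 3
  oops = [ double 3 ];
  # one element: 6; the parentheses force the application
  right = [ (double 3) ];
}
```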
It is the language. The module system is both semantically indispensable and a second class citizen. It's another language, implemented on top of Nix. Once you have a userland "if" reimplemented in your language you know you're in a bad place. (`mkIf`)
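For reference, a minimal sketch of `mkIf` in a module (the option name is hypothetical):

```nix
{ config, lib, pkgs, ... }:
{
  # a plain `if config.services.myservice.enable then ... else ...`
  # here would force evaluation too early and can cause infinite
  # recursion, so the module system ships a userland conditional:
  config = lib.mkIf config.services.myservice.enable {
    environment.systemPackages = [ pkgs.curl ];
  };
}
```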
Maybe lazily evaluated attrsets can help make a dent, but the lack of static types for module code is still beyond painful. It's hostile.
I believe Nix is worth it in spite of this, and I'll advise anyone to learn it, it truly is the way forward, but by god do I hope it's not the last step on this journey. Please, Lord, please don't let nixlang be the final iteration XD
I read the same complaint about the language from people I follow who love and actively promote Nix. So it's not just you.
Sorry for adding to your frustration of "just follow what someone else did" but I recently went all-in on managing my Mac (programs, dotfiles, configs, etc) via Nix* when setting up a new machine recently. https://github.com/landaire/config/tree/main/modules
*Nix + homebrew, mostly because Homebrew packages more macOS applications.
I've been on a similar journey this past month, although it sounds like mine went a little more successfully. I've managed to get a repo set up which contains a nix flake with nix-darwin configuration, and it also calls into some home-manager modules which I use on a Linux device as well. I do agree, the nix language isn't particularly to my taste either.
I know you're hoping to go from first principles, but I'm happy to share the repo if you want (email in my profile).
Aside from that, what issues did you run into? I'm keen to know if I've just not gone deep enough and will soon hit something.
I had the same reaction my first year. I found the NixOS documentation to be very poor and the lack of a single set of best practices (e.g., imperative, declarative, home config, flakes) to be frustrating.
I switched a couple devices to Guix and was at first encouraged by their much better docs, but the lack of features and battle testing has been a problem with longer use.
I've mostly been happy to go back to NixOS thanks to LLMs. Even a year ago, AI was very good at updating Nix configs and fixing any errors. Ideally Nix would have better docs and a more intuitive unified config system, but LLMs have made it usable and the best solution for now.
I struggled with this too and it took me a while to accept that there is no right way. There are many ways, and there is a lot of legacy style out there, but ultimately you have to do what works for your own productivity/sanity.
you should look into learning how to write modules. nix-darwin at its core is a somewhat underbaked port of NixOS to macOS with the same very useful module system. otherwise, look into just getting home-manager working and work your way up.
At [company x] someone wrote a starter guide that tells developers to create a "production", "staging" and "dev" branch for any new repo. We have tons of repositories that follow this pattern.
For many of them, each branch has taken on a life of its own and could be considered a completely different codebase. It's a nightmare to manage and it confuses developers on a regular basis.
Don't do this.
If you want to deploy different versions of your software in different environments, use SemVer and git tags. Don't create one branch per environment...
I have since edited that starter guide and crossed out this recommendation.
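A sketch of the tag-based flow (the version number is made up; this runs in a throwaway repo):

```shell
# work happens on main; a release is an immutable, named commit
cd "$(mktemp -d)" && git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "feature work"

# cut a release: the annotated tag pins one exact commit forever
git tag -a v1.25.0 -m "release 1.25.0"

# prod deploys the tag, not a moving branch
git checkout -q v1.25.0
git describe --tags   # prints v1.25.0
```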
It works fine if you review PRs and only allow STG->PRD promotions. It breaks down when people start making separate builds for each env.
Treat env as config; you'll just have to manage a config folder in that repo.
I concur, it works fine as long as devs follow the procedure. I also prefer to enforce linear history so that git bisect works properly; but this requires devs to understand how to use --ff-only and, if you're using GitHub, to use a GitHub Action to fast forward, since GitHub doesn't natively support fast-forward merges (one of GitHub's many sins).
But then I also find I need to train devs on how git actually works and how to use it properly, because I find that only about 10% of devs actually understand git. It works out best for everyone once all the devs understand git, so generally most devs appreciate it when someone is willing to teach them the ins and outs (though not all of them appreciate it before they've learned it properly).
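A sketch of the fast-forward-only promotion in a throwaway repo:

```shell
# set up a toy repo with staging and dev pointing at the same base
cd "$(mktemp -d)" && git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base"
git branch staging
git checkout -q -b dev
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "reviewed feature"

# promote dev -> staging: succeeds only when staging is a strict
# ancestor of dev, so history stays linear and bisect-friendly
git checkout -q staging
git merge -q --ff-only dev
git log --oneline   # two commits, no merge commit
```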
Sorry but you are just using source control very wrong if you keep 2 parallel environments in the exact same code base but different branches. The build itself should know whether to build for one environment or another!
They are the same only sometimes. Devs work on code on a feature / fix / whatever branch; when they've finished dev testing, you do a code review and it gets fast-forwarded onto the dev branch; when it suits, it gets fast-forwarded to staging (non-dev-team stakeholder testing / infra testing); and when it passes staging testing (if necessary), it gets fast-forwarded onto prod and deployed. So dev will sometimes point to the same commit as staging and sometimes not, and staging will sometimes point to the same commit as prod and sometimes not. It's a funnel, a conveyor belt if you will.
Mobile app release processes will disagree with you. There's a gap of around 4 days between what you consider a release and what can be on prod. If you get rejected in review, you need to edit the code. If you want to roll back, you need to edit the code. You can only be linear if you control releases.
> For many of them, each branch has taken of its own life and could be considered its own completely different codebase.
Seems you have bigger process issues to tackle. There's nothing inherently wrong with having per-env branches (if anything, it's made harder by git being so terrible at branching in the general/long-lived case, but the VCS alone can't be blamed for developers consistently pushing to the wrong branches).
> There's nothing inherently wrong with having per-env branches
There is when you stop thinking in terms of dev, staging and prod, and you realize that you might have thousands of different environments, all named differently.
Do you create a branch for each one of them?
Using the environment name as branch name is coupling your repository with the external infrastructure that's running your code. If that infrastructure changes, you need to change your repository. That in itself is a cue it's a bad idea to use branches this way.
Another issue with this pattern is that you can't know what's deployed at any given time in prod. Deploying the "production" branch might yield a different result 10 minutes from now, than it did 25 minutes ago. (add to the mix caching issues, and you have a great recipe for confusing and hard to debug issues)
If you use tags, which are literally meant for that, combined with semver (not necessarily a requirement, but a strong recommendation), you decouple your code from the external environment.
You can now point your "dev" environment to "main", point staging to ">= v1.25.0" and "prod" to "v1.25.0", "dev-alice" to "v2.0.0", "dev-john" to "deadb33f".
When you deploy "v1.25.0" in prod, you _know_ it will deploy v1.25.0 and not commit deadb33f that so happened to have been merged to the "production" branch 30 seconds ago.
Before git abused the terminology, a branch used to refer to a long-lived/persistent commit lineage, most often implemented as a commit-level flag/attribute.
OTOH, git branches are pointers to one single commit (with the git UI tentatively converting this information sometimes into "that commit, specifically" and sometimes into "all ancestor commits leading to that commit", with more or less success and consistency).
Where it matters (besides fostering good/consistent UX) is when you merge several (topological) branches together: git won't be able to tell if you just merged A into B or B into A. Although the content is identical at code level, the semantics/intent of the merge is lost. Similarly, once the head has progressed far ahead and your history is riddled with merges, you can't tell from the DAG where the individual features/PRs/series start and end. This makes bisecting very hard: while hunting down a regression, you would rather avoid checking out mid-series commits that might break the build, and instead stick to the series boundaries. You can't do that natively with git. That also makes maintaining concurrent versions unnecessarily difficult, and many projects are struggling with that: have you seen, for instance, Django¹ prefixing each and every commit with the (long-lived) branch name? That's what you get with git, while most other VCSes (like Mercurial, my preference) got this right from the start.
Branches are semantic. The true unit is the commit, and the tree is the result of applying a set of commits. Branching is just selecting a set of commits for a tree. There's no wrong or right branch, just the matter of generating the wrong patch.
Branches are mutable and regularly point to a new commit. Branching is selecting an active line of development, a set of commits that change over time.
That's why git also offer tags. Tags are immutable.
There are multiple valid branching strategies. Your recommended strategy works well[0] with evergreen deployments, but would fail hard if you intend to support multiple release versions of an app, which happens often in the embedded world with multiple hardware targets, or self-hosted, large enterprise apps that require qualification sign-offs.
0. SemVer has many issues that I won't repeat here, mostly stemming from projecting a graph of changes onto a single dimension.
I always thought multiple hardware targets are solved by build flags. And keep the one branch. E.g. in Go you can include/exclude a file based on "build tags":
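A minimal sketch of that pattern (the tag and file names are made up). In a real layout this file would carry the constraint `//go:build !hw_v2`, and a sibling `target_v2.go` with `//go:build hw_v2` would define the same constant, so `go build -tags hw_v2` swaps implementations without a separate branch; it's collapsed into one runnable file here:

```go
package main

import "fmt"

// target_v2.go, compiled only with `go build -tags hw_v2`,
// would instead define: const target = "hw_v2"
const target = "hw_v1"

func buildBanner() string { return "building for: " + target }

func main() {
	fmt.Println(buildBanner()) // prints "building for: hw_v1"
}
```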
> but would fail hard if you intend to support multiple release versions of an app, which happens often in the embedded world with multiple hardware targets, or self-hosted, large enterprise apps that require qualification sign-offs.
I don't have experience in this world, indeed.
But isn't "multiple release versions of an app" just "one application, with multiple different configurations"? The application code is the same (same version), the configuration (which is external to the application) is different.
Your build system takes your application code and the configuration as input, and outputs artifacts for that specific combination of inputs.
> But isn't "multiple release versions of an app" just "one application, with multiple different configurations"?
That would be nice (and evergreen), but that's not always the case. It's common to have different versions of the app released simultaneously, with different features and bugfixes shipped.
Think of Microsoft simultaneously supporting Windows 10 and 11 while still releasing patches for XP: they are all individual OSes that share some common code, but they can't be detangled at build time[1].
The customer will be reluctant to upgrade major versions due to licensing costs and the risk of breakage (your code, or their integrations), but will still expect bugfixes (and only bugfixes) on their deployed versions, which you're contracted to provide. That doesn't work with an evergreen approach.
I'm not convinced using build flags to manage which code is shipped is superior to release branches, I fall on the side of release branches because using bisect is invaluable.
1. I suppose as long as the build system is turing complete, one could hypothetically build Windows XP, 7, 8, 10 and 11 off the same codebase using flags. I would not envy that person.
"At company x, they had a kitchen and a couple meeting rooms. Devs started using the rooms for cooking, and the kitchen for team standups."
Tools are just there, it's people who misuse them. If devs at company x are incapable of understanding that you shouldn't be cooking an omelette in the meeting room, to be honest that's on the dev, not on the separation of concerns that was put there for them.
Probably what's missing there is training to set the expectations, policing of their actions, and a soft reprimand when the typical first-time mistake is made. But if the metaphorical rooms have no indicators, no name tags, and no obvious usage guidelines, because no one bothered to set them up, then yeah, expect board meetings to end up happening in the kitchen.
Whenever I run into such a problem I use google (or the web search of your choice). You may not have expected it (and you won't always be this fortunate), but this immediately makes the identity, nature, and meaning of grug evident.
> "Who cares if the words are objectively untrue? We have plausible deniability now that we said them!"
But they are not "objectively untrue". You can argue all day long that you don't believe the authors are being truthful; it doesn't make it true.
edit: that being said, in juxtaposition with a copyrighted Marvel image, I could see it being used in court against the authors to prove they were catering to piracy all along.
I always find it funny when people with no legal education jump in and defend something that clearly requires legal education. At least with the second edit you made it clear. Thank you.
I’m sorry but it is objectively untrue that this software does not “facilitate copyright infringement, illegal streaming, or piracy in any form”. What is the purpose of this project if it is not to “facilitate” watching torrented material?
Once again, the existence of legal use cases does not invalidate the existence of illegal use cases. Do you genuinely believe the primary use case of this software by a majority of its users will be to download this type of legal content?
i’m not judging anyone that’s interested in the tech here. it’s pretty neat.
but do you think we were all born yesterday? Are you suggesting _most_ people using a BitTorrent client are downloading public domain movies like Sherlock Holmes shorts? Or Linux ISOs for fun?
You are technically correct. But it doesn’t take a genius to understand that the disclaimer here is a total joke.
This is more plausible deniability talk. No one has suggested that all torrent use is illegal. But this software absolutely “facilitates” illegal use cases. A gun can be used legally, it would still be ludicrous to say that guns never “facilitate” murder.
I think it's fair to say that the software itself could/does facilitate illegal use cases. But by that line of argumentation, all software facilitates illegal use cases just by existing.
The statement "We do not endorse, promote, or facilitate copyright infringement, illegal streaming, or piracy in any form", might be poorly written with regards to the fact that just by existing this torrent streaming program _does_ facilitate piracy, but I don't think this was your original argument.
I’ll be honest, I really don’t know what argument you think I’m making or that you yourself are making.
The truth here is that this software will overwhelmingly be used in an illegal manner. The creators knew that when they wrote that disclaimer, and we all know it when reading the disclaimer. Yet the disclaimer is still placed there like it has some reason for existing beyond allowing everyone to pretend something that is happening isn't happening. Your comments here seem to just be continuing that charade.
I’m not even condemning this software or illegally pirating movies and TV shows. I’m just remarking on the silliness of the disclaimer.
Same argument you're making would be that gun manufacturers know that their product will be used to kill lots of people, and any disclaimer on the package to not murder is silly. Would you make that argument with a straight face or change your argument as a result?
Or does it make sense to put a disclaimer on there, not just from a legal perspective, but to actively discourage those users who haven't made up their minds already? While people absolutely can use this software for pirating content (the ethics of which are openly debated), I've known very few individuals who torrent to actually profit from others' material, but I know of plenty of anti-piracy advocates who use stolen content for profit.
I've also known bucketloads of people who have paid $50+ for a movie in the theater or $10+ for a rental at home, only to realize how badly they were duped by the industry into paying for something that was practically garbage, which they ended up not watching anyway, yet the purchase was nonrefundable. Unfortunately this happens several times over, because much of the apparent interest in something is actually advertising, which appeals to their desire to fit in. It is often very exploitative.
I've also known a decent number of people who discovered content they found joy in by torrenting, maybe while depressed, struggling to get out of bed or find inspiration, and as a result improved their condition and became pretty big supporters of those who made that content, which they would then gladly pay for thereafter.
Seriously, any actual good artist I've known usually would be the first to encourage someone to pirate their content because they understand that the people that like it will support them, and the people that don't... they have no desire to exploit them.
Like how you can claim people shouldn't shoot up heroin, while still giving them clean needles if they're going to do it anyway.
> Same argument you're making would be that gun manufacturers know that their product will be used to kill lots of people
Not a great example because very few guns will be used to kill people whereas an overwhelming majority of the users of this software will use it to view pirated material.
While most people may not see breaking the speed limit as the primary purpose of their car, the way cars are designed, especially marketed and used in everyday life normalizes and even encourages exceeding posted speeds. This makes speeding not an edge case, but a central, majority use case in practice.
Ok, that's not actually what I believe, I don't even know if you could make this argument. This is just for the arguments sake, sorry.
TL;DR: I am nitpicking on the use of "objectively untrue" and the implication this disclaimer serves no purpose.
> The creators knew that when they wrote that disclaimer and we all know that reading the disclaimer.
This is the idea I'm pushing back against.
Yes, you are very likely correct in your assessment that the creators know that their software will be used illegally.
No, you are incorrect when 1) saying this is "objectively untrue" and 2) implying the statement might _not_ have some protective qualities.
To take a purposefully exaggerated analogy: you can believe all day long someone committed murder, it still doesn't make it true. You can argue all day long the authors aren't being truthful, it still doesn't make it true.
> Yet the disclaimer is still placed there like it has some reason for existing beyond allowing everyone to pretend something that is happening isn’t happening
I'd agree with this, and, add that, at the same time, (assuming the USA here) it's probably placed there for legal reasons (whether it factually matters legally or not is a question for an actual lawyer, which, objectively, I am not).
> I’m just remarking on the silliness of the disclaimer.
It feels a bit silly, yes, and at the same time... needed?
You're shifting what I used “objectively untrue” to describe. Here's what I originally said, “the words are objectively untrue”. I was not describing the thought process of the creators because that is obviously unknowable to us. I was instead describing the accuracy of “the words” claiming that the software does not facilitate copyright infringement. That claim is “objectively untrue”. The software obviously does facilitate this, which you seemingly already agreed to being true. The authors' thoughts on the matter don't impact the objective truth.
Also, I don't know what compelled you to speculate on the legal value of the disclaimer while also admitting you have no actual insight into that issue. That feels like posting just to post. You're not even baselessly speculating that I'm wrong, you're baselessly speculating that I might be wrong.
It's also interesting to see that the author themselves are clearly fighting against their own instinct to use uppercase: the first 2 items in the "here's what happens in that video:" list use uppercase.
But why are you doing it in the first place? This is a genuine question, what do you gain by actively fighting against proper writing rules which aid in readability and comprehension? What’s the rationale for making it harder for users to follow your post?
I know I’m far from alone in having skipped your post entirely upon opening. Nothing personal, but I have yet to find a single post by anyone written in this style where the content was worth the effort of parsing non-existent capitalisation.
You go through the trouble of adding aids like syntax highlighting, lists, coloured titles, and even differentiated notes and timestamps. Presumably those are there to help the reader. But then you throw away a lot of readability by making everything lowercase.
> There was a loud explosion earlier this morning, which may or may not be related, but there aren't any power outages.
Loud explosions happen all the time in SF. Particularly in the Lower Nob Hill / Tenderloin area.
I lived in Lower Nob Hill for many years and heard countless explosions, most often in the middle of the night (2am-4am) or early hours (6am). Often these explosions were M80s-M1000s being dropped and detonated.
Someone was arrested back in 2019 and the explosions reduced dramatically.
Dark background, too many colors, inconsistent spacing, inconsistent font-size and/or family, some links appear fully pink with pink underline, some links aren't pink and only have the underline, inline <code> is blue, but large code blocks are the same color as regular text – on black background, etc.