
I initially read "the order of files in /etc/ssh/sshd_config.d/" to mean the order of files in the underlying directory inode, i.e. as returned by `ls -f` — and thought, "oh god"... But the lexicographical order, that's not too surprising.
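
A sketch of why that ordering matters, assuming the stock `Include /etc/ssh/sshd_config.d/*.conf` line and made-up file names:

  $ ls /etc/ssh/sshd_config.d/
  10-site-defaults.conf    # PasswordAuthentication yes
  50-hardening.conf        # PasswordAuthentication no
  # the Include glob expands in sorted order, and sshd keeps the first value
  # it sees for each keyword, so the "yes" from 10-site-defaults.conf wins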


I mean, no. If you work on a codebase that's been going for more than a few years, the author likely doesn't even work there anymore. The commit is the important thing.


Frankly the commit message is usually the important thing. I care about why a change happened. Give me a Jira ticket, or a line of reasoning, or some documentation. I need to know this far more often than I care who literally typed the code in the computer.


You also shouldn't assume the commit author is the same person who literally typed the code. Git is a version control system, not an audit trail.


C23 has constexpr, albeit only for objects, not functions. The code also uses namespaces, as in `std::aligned_alloc(P, T)`.
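
A tiny sketch of the distinction, with made-up identifiers:

  #include <stdlib.h>

  constexpr size_t alignment = 64;       // OK in C23: constexpr objects
  // constexpr size_t align(void);       // not C23: no constexpr functions

  int main(void) {
      void *p = aligned_alloc(alignment, 128);  // plain C, no std:: prefix
      free(p);
  }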


You have to be a third of the way down the page to see any of this in the code examples. By that point you would already have seen a number of examples that are valid C code, and it's reasonable to expect the rest to be. At least, that was my experience.


Meanwhile, Kerberos (also from Project Athena) has been at version 5 since September 1993.


> NFS still sucks (IMO it sucks more than it used to)

Any chance you could elaborate?


Several of the criticisms the book lists are still true today. File locking is unreliable, deletions are weird, security is either garbage (in that you set it up in a way where there's very little security) or trash (in that you have to set up Kerberos infrastructure to make it work, and no one wants to have to do that).

Perhaps I was a bit hyperbolic about it sucking more nowadays. At least you can use TCP with it and not UDP, and you can configure it so you can actually interrupt file operations when the server unexpectedly goes away and doesn't come back, instead of having to reboot your machine to clear things out. But most of what the book says is still the NFS status quo today, 30 years later.


Everything you listed was fixed in NFSv4. Don't use the ancient versions of NFS.


We're not there with authentication yet (although I've no problem with Kerberos myself).


How are we not there? The only real issue I know of is allegedly requiring host keys for gssd (i.e. "joining the domain"), but rpc.gssd(8) documents "anyname" principals.


The only per-user authentication option is Kerberos. Username/password based authentication is not possible.


That seems like a feature; mounting SMB on a local system is done on the basis of a password, and it's horrible. (I assume you could, in principle, use some other GSSAPI mechanism.)


There has been recent work on RPC-with-TLS (RFC 9289), surfaced as the xprtsec=mtls mount option.
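
Roughly (assuming a recent kernel and the userspace TLS handshake daemon, tlshd, configured on both ends):

  mount -t nfs -o vers=4.2,xprtsec=mtls server.example.com:/export /mnt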


AIUI this is still not user-level authentication. Rather, it secures the communication between hosts; you still have to choose between sec=sys ("trust me bro") and sec=krb5* at the upper layer.


The easiest way nowadays to get secure NFS is to just set up a WireGuard tunnel.


No, because you still have to trust the client.

With Kerberos, a compromised client on which user 1 has authenticated can't impersonate user 2 unless that user has also authenticated on that client.

With sec=sys the client is simply trusted without any per-user authentication.
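
Concretely, the difference is the sec= option on the export and the mount; a sketch with made-up paths:

  # /etc/exports on the server:
  #   sec=sys   trusts whatever uid number the client sends
  #   sec=krb5  requires each user to present a Kerberos ticket
  /srv/data  10.0.0.0/24(rw,sec=krb5)

  # on the client:
  mount -o sec=krb5 server:/srv/data /mnt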


In most cases you can just use more fine-grained exports, e.g. export /home/user1 to 10.0.0.1 and /home/user2 to 10.0.0.2 instead of /home to 10.0.0.0/24.
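
In exports(5) terms, roughly:

  /home/user1  10.0.0.1(rw,sync,no_subtree_check)
  /home/user2  10.0.0.2(rw,sync,no_subtree_check)
  # instead of:  /home  10.0.0.0/24(rw,sync,no_subtree_check)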


The distinction I suppose is that what you really mean is "the difference in [necessary] skill level between a burger flipper and an accountant".


Not skill - that's an internal metric, not an external one. The difference is in what people will pay for that skill.


I suppose such feedback could be used for reaching a fixpoint. Suppose you have a build system that reads targets to be built from stdin and writes to stdout the targets that depend on them and must now be rebuilt. With an ouroboros, the build system will continue to run, even if the dependency graph is dynamically cyclic, until the fixpoint is reached and the build terminates.
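
The same fixpoint can also be written as an explicit worklist loop, which makes the termination condition concrete (`build-step` is the hypothetical tool above, assumed to emit a dependent only when rebuilding actually changed its input):

  printf 'all\n' > worklist
  while [ -s worklist ]; do
      build-step < worklist > next   # emits only targets whose inputs changed
      mv next worklist               # empty output means the fixpoint is reached
  done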


I mean, learning Haskell has made me a better programmer even if I've never used it at work.

Perl I have used. I wouldn't say the same about that...


Perl is the canonical example of "less is sometimes better".

I love coding in Perl. But it seems that everyone uses a disjoint subset of it, which makes collaboration awkward.

And, if C lets you shoot yourself in the foot and C++ gives you a shotgun to do so... Perl gives you a timed nuclear device.


Perl (without dependencies) works awesomely well as a replacement for bash in scripts, in my experience. Unlike with Python, the chances that it will break next month (or next decade) are virtually nil.


Python without dependencies will also work everywhere basically forever. Hell, most Python 2 is valid Python 3, but it's been over a decade now - Python 3 is the default system Python in most everything.


“Most” is a highly load-bearing word here.

You can’t go back and rewrite the scripts that don’t work so they do.


It does make sense — the Unix debugger has always been *db, with no bridges in sight.

https://en.wikipedia.org/wiki/Advanced_Debugger


You can do...

  <access.log head -n 500 | grep ...
... though that's less familiar to many, I'm sure.


Neat, I didn't realize you could order it like that.

I'm realizing now that another (and potentially stronger) influence is just years of muscle memory starting pipelines with cat.

