smilliken's comments | Hacker News

Practically speaking, it's impossible to roll a 6 one hundred times in a row on fair dice. Not technically impossible, but we each get to calibrate our skepticism based on how far out the probabilities are.

In this case we can be sure the dice aren't fair because there's significant motivation for them not to be, or at least it's easy to imagine a manufacturing defect in the dice.
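For scale, the arithmetic behind that calibration (assuming fair, independent six-sided dice) is easy to check:

```python
from fractions import Fraction

# Probability of one hundred consecutive 6s on a fair die: (1/6)^100.
p = Fraction(1, 6) ** 100
print(float(p))  # roughly 1.5e-78
```

That's dozens of orders of magnitude past any "one in a trillion" threshold, so any prior at all about loaded dice dominates.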


This is a 1 in 50 chance we are dismissing as practically impossible though.

You can have this today or 15+ years ago using the excellent gevent library for Python. Python 3 should have just endorsed gevent as the blessed solution instead of adding function coloring and new syntax, but you can blissfully ignore all of that if you use gevent.


The best kind of documentation is the kind you can trust to be accurate. Type definitions wouldn't be nearly as useful if you couldn't really trust them. Similarly, doctests are some of the most useful documentation because you can be sure they are accurate.
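A sketch of why (using a hypothetical `add` function): the usage example lives in the docstring, and the doctest runner executes it, so the documentation fails loudly the moment it stops matching the code:

```python
def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    """
    return a + b

if __name__ == "__main__":
    import doctest
    # Executes every >>> example in this module's docstrings and
    # reports any whose output no longer matches.
    doctest.testmod()
```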


The best docs are the ones you can trust are accurate. The second best docs are ones that you can programmatically validate. The worst docs are the ones that can’t be validated without lots of specialized effort.

Python’s type hints are in the second category.
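They're in the second category partly because the hints are ordinary runtime objects: a checker like mypy or pyright, or even a small script, can read them and validate call sites against them instead of trusting prose. A minimal illustration (hypothetical `scale` function):

```python
from typing import get_type_hints

def scale(x: float, factor: float = 2.0) -> float:
    """Multiply x by factor."""
    return x * factor

# The annotations are machine-readable, so a tool can check callers
# against them rather than against the docstring.
hints = get_type_hints(scale)
print(hints)  # {'x': <class 'float'>, 'factor': <class 'float'>, 'return': <class 'float'>}
```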


I’d almost switch the order here! In a world with agentic coding agents that can constantly check for type errors from the language server powering the errors/warnings in your IDE, and reconcile them against prose in docstrings… types you can programmatically validate are incredibly valuable.


Do you have an example of the first?


When I wrote that, I was thinking about typed, compiled languages' documentation generated by the compiler at build time. Assuming that version drift ("D'oh, I was reading the docs for v1.2.3 but running v4.5.6") is user error and not a docs-trustworthiness issue, that'd qualify.

But now that I'm coming back to it, I think that this might be a larger category than I first envisioned, including projects whose build/release processes very reliably include the generation+validation+publication of updated docs. That doesn't imply a specific language or release automation, just a strong track record of doc-accuracy linked to releases.

In other words, if a user can validate/regenerate the docs for a project, that gets it 9/10 points. The remaining point is the squishier "the first party docs are always available and well-validated for accuracy" stuff.


Another example of extremely far towards the "accurate and trustworthy" end of the spectrum: asking a running webservice for the e.g. Swagger/OpenAPI schema that it is currently using to serve requests. If you can trust that those docs are produced (on request or cached at deployment time) by the same backend application instances serving other requests, you'd have pretty high assurance.

Nobody does that, though. Instead they all auto-publish their OpenAPI schemas through rickety-ass, fail-soft build systems to flaky, unmonitored CDNs. Then they get mad at users who tell them when their API docs don't match their running APIs.
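For what it's worth, the self-serving-schema idea needs very little machinery. A minimal sketch using only the Python standard library (the schema and routes here are hypothetical): the same process that answers requests also answers for its own docs, so the two can't drift apart:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical schema; a real service would generate this from its routes.
SCHEMA = {
    "openapi": "3.0.0",
    "info": {"title": "demo", "version": "1.0.0"},
    "paths": {"/ping": {"get": {"responses": {"200": {"description": "pong"}}}}},
}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/openapi.json":
            # Served by the same process that routes /ping.
            body = json.dumps(SCHEMA).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        elif self.path == "/ping":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"pong")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet
```

Frameworks that derive the schema from the route definitions themselves, rather than from a separate docs build, give you this property for free.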


Languages with strong static type systems


Is there a mainstream language where you can’t arbitrarily cast a variable to any other type?


https://histre.com does full text search on browser history


The best way is to open a capsule for each batch you receive to test it by taste, then store in the fridge.


> Second: there is no CEO in tech taking a smaller salary than their employees.

That's not just false but very often false.


It's the exceptional codebase that's nice to work with when it gets large and has many contributors. Most won't succeed no matter the language. Language is a factor, but I believe a more important factor is caring a lot.

I've been working on a Python codebase for 15 years straight that's nearing 1 million lines of code. Each year with it is better than the last, to the extent that it's painful to write code in a fresh project without all the libraries and dev tools.

Your experience with Python is valid, and I've heard it echoed enough times that I'd believe it of any language, but my own experience encourages me to recommend it. The advice I'd give is to care a lot, review code, and keep investing in improvements and dev tools. Git pre-commit hooks (run just on changed modules) with ruff, pylint, pyright, isort, and unit test execution help a lot for keeping quality up and saving time in code review.
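As a sketch of that kind of hook (the tool list and flags are illustrative, not a drop-in config), a script along these lines keeps the hook fast by linting only the staged Python files:

```python
"""Hypothetical git pre-commit hook: lint only the staged Python files."""
import subprocess
import sys

def py_only(paths):
    # Keep just the Python sources out of an arbitrary list of changed paths.
    return [p for p in paths if p.endswith(".py")]

def staged_files():
    # Ask git for the files staged for commit (added/copied/modified).
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

def main():
    files = py_only(staged_files())
    if not files:
        return 0
    # Tool names and flags are illustrative; substitute your own lint stack.
    for tool in (["ruff", "check"], ["isort", "--check-only"], ["pyright"]):
        if subprocess.run(tool + files).returncode != 0:
            return 1  # block the commit on the first failure
    return 0

if __name__ == "__main__":
    sys.exit(main())
```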


They aren't talking about C and its descendants in particular, but more generally. For example in Haskell and Scheme there is only an if function and no if statement. And you're welcome to create an if function in any language you like and use it instead of the native syntax. I like to use an if function in PostgreSQL because it's less cumbersome than a case expression.

So in the abstract, if is a ternary function. I think the original comment was reflecting on how "if (true) ... " looks like a function call of one argument but that's obviously wrong.


this is not quite right. haskell and scheme have if expressions, not if statements. that's not the same as if being a function. if is not, and cannot be, a function in scheme, as it does not have scheme function semantics. specifically, it is not strict, as it does not evaluate all its subexpressions before executing. since haskell is non-strict, if can be implemented as a function, and iirc it is


> since haskell is non-strict, if can be implemented as a function, and iirc it is

"If" can be implemented as a function in Haskell, but it's not one. You can't pass it to a higher-order function, and it uses the "then" and "else" keywords, too. But you could implement it as a function if you wanted:

  if' :: Bool -> a -> a -> a
  if' True x _ = x
  if' False _ y = y
Then instead of writing something like this:

  max x y = if x > y then x else y
You'd write this:

  max x y = if' (x > y) x y
But the "then" and "else" remove the need for parentheses around the expressions.


if in Scheme can be, and in some cases is, implemented as a macro, which takes arguments and can be invoked like a function.


Arguments are expressions in Haskell. In the abstract, if takes expressions as arguments.


The em dash was in popular use long before ChatGPT. It's a useful punctuation mark, and a short dash is not a good substitute. Consider whether you'd use it if it were a dedicated key on your keyboard; if so, it's worth the small inconvenience of learning how to type it.


Not just the em dash, the whole post stinks of ChatGPT, and there are two other obvious tells in the sentence I quoted.

If you know you know.


Fair enough. I'm sensitive about the em dash being used as a tell, which I've seen mentioned once or twice, because I don't want people to dumb down punctuation to avoid being confused for an LLM. I'd guess it's a temporary issue until the LLMs get so good at blending in that we can't tell anymore.


The reason someone changes a dependency at all is that they expect a difference in behavior. No one would bother updating a dependency if they weren't getting something out of it; that would be wasted effort and an unnecessary risk.

Each person doesn't have to perform the build on their own. A build server will evaluate it and others will pull it from the cache.

The greater waste that Nix eliminates is the human time spent troubleshooting something that broke in production because of what should have been an innocent change, and the business value lost while production is degraded. When you can trust that your dependencies are exactly what you asked for, it frees your mind of doubt and lets you troubleshoot the real problem more efficiently.

Aside: I spent over a decade on Debian-derived distros. I never once had one of them complete an upgrade between major versions successfully, despite about 10 attempts spread over those years, though thankfully always on the first sacrificial server. They always failed with interesting issues, sometimes before they really got started, sometimes borking the system and requiring a fresh install.

With NixOS, upgrades are so reliable that they can be done casually during the workday in production without bothering to check that they succeeded. I don't think that would be possible if we wanted the false efficiency of substituting similar-but-different packages to save the build server from building the exact specification. Anything short of this doesn't get us away from the "works on my machine" problem.


> With NixOS, the upgrades are so reliable

Yeah, they may be reliable _for you_. And note that this reliability doesn't come automatically with Nix's model; it is only possible because many people put a lot of effort into making it work correctly.

If you used the unstable channels, you would know. My NixOS upgrades break _all_ the time: on average, probably once a month.


Yep, anyone not getting how absolutely huge the Nix model is should just install the whole KDE desktop and the whole GNOME desktop, then uninstall both. Only Nix can make that basically a no-op.

