No. I thought it was widely known that kindergarten employees suffer from hearing loss due to repeated exposure to high sound levels from screaming kids.
> Currently, it seems like it might be considered to be a backwards compatibility break though, as the Cargo team is unsure if some people weren’t relying on the metadata being present in the .rlib files
It seems wild to consider such intermediate files as part of public API. Someone relying on it does not automatically make it a breaking change if it’s not documented.
> With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.
Certain discussions on HN turn into decision diagrams thanks to Laws(tm) and various one-liner-tier references.
- Hyrum’s Law (85%)
- Emacs spacebar overheating (15%)
The only way to prevent the decision diagram is to anticipate the references and spell them out in the last paragraph. But on the other hand, that isn't very fun, right?
While I can imagine some edge cases where this approach can be meaningful, isn't that generally counterproductive?
Not only does one have to be actively aware of all the behaviors they don't document (which is surely not an easy task for any large project), they also have to spend a non-negligible amount of time adding randomness in a way that still allows all the internal use cases to work cohesively. That means less time spent on something actually useful.
Instead of randomizing, it should be sufficient to just settle on semantics for clearly communicating which APIs are public and stable, and which are internal and subject to change at whim. And maybe slap a big fat warning: "if something is not documented, it's internal, and $deity help you if you depend on it, for we make no guarantees except that it'll break on some fine day and that day won't be so fine anymore". Then it's not your problem.
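For what it's worth, the randomizing approach doesn't have to be expensive. A minimal sketch of the idea (hypothetical API, not from any real project): shuffle any ordering you never promised, so callers can't come to depend on it.

```go
package main

import (
	"fmt"
	"math/rand"
)

// listFeatures is a hypothetical API returning supported feature names.
// The order is deliberately shuffled on every call so that callers
// cannot accidentally rely on it (the Hyrum's Law mitigation).
func listFeatures() []string {
	features := []string{"gzip", "tls", "http2"}
	rand.Shuffle(len(features), func(i, j int) {
		features[i], features[j] = features[j], features[i]
	})
	return features
}

func main() {
	// The set of features is stable; the order is not.
	fmt.Println(listFeatures())
	fmt.Println(listFeatures())
}
```

The contents stay deterministic; only the unspecified ordering is perturbed, so internal callers that treat the result as a set keep working.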
> To avoid giving the illusion that the output is stable, we deliberately introduce minor differences so that byte-for-byte comparisons are likely to fail.
Go also randomizes the iteration order of map keys, to emphasize that maps are unordered and code should not rely on insertion order. For demonstration:
    package main

    import "fmt"

    func main() {
        m := map[string]int{"a": 1, "b": 2, "c": 3}
        // Two identical loops over the same map can print the keys
        // in different orders, because Go randomizes map iteration.
        for k := range m {
            fmt.Println(k)
        }
        for k := range m {
            fmt.Println(k)
        }
    }
Very few do, and only quite modern ones. Although I believe there are hashtable libraries where the iteration order is unspecified but generally consistent, only changing when a resize shuffles the elements into different buckets.
> It seems wild to consider such intermediate files as part of public API. Someone relying on it does not automatically make it a breaking change if it’s not documented.
We are working on making this clearer with https://github.com/rust-lang/cargo/issues/14125 where there will be `build.build-dir` (intermediate files) and `build.target-dir` (final artifacts).
When you run a `cargo build` inside a library, like `clap`, you will get an rlib copied into `build.target-dir` (final artifacts). This is intended for integration with other build systems. There are holes in this workflow though, and identifying all of the relevant cases for what might be a "safe" breakage is difficult.
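Under that proposal, the split might look something like this in `.cargo/config.toml` (a sketch; the exact key names and defaults are still being decided in the linked issue):

```toml
[build]
build-dir = "target/build"  # intermediate files (rlibs, incremental caches, dep-info)
target-dir = "target"       # final artifacts intended for external consumption
```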
This metadata has been around for years, and Rust releases new versions every six weeks. Whether or not it's technically a "breaking change", it's not unreasonable to spend a little time figuring out whether something will break for someone if they remove it; it's only another month and a half at most before the next chance to stabilize it comes.
At a higher level, as much as it's easier to pretend that "breaking" or "non-breaking" changes are a binary, the terms are only useful in how they describe the murkier reality of how people actually use something. The point of having those distinctions is in how they communicate things to users; developers are promising not to break certain things so that users can rely on them to remain working. That doesn't mean that other changes won't have any impact to users though, and there's nothing wrong with developers taking that into account.
As an analogy, imagine I promise to mow your lawn every week, and then I also mow your neighbor's lawn without making them the same promise. I notice that my old mower takes a long time to finish your lawn, and I realize that a newer electric mower with higher power usage would help me do it faster. I need to make sure that higher power usage is safe for me to use on your property, but I'm not breaking my promise to you if I delay my purchase to check with your neighbor about whether it would be safe for theirs as well, and take that into account in my decision. That doesn't mean I'm committing to only buying it if it's safe for their lawn, but it's information that still has some value for me to know in advance, and if it means that your lawn will continue to get cut with the old mower while I figure that out, it doesn't mean that I'm somehow elevating the concern of their lawn to the same level as yours. You might not choose to care about the neighbor's lawn in my position, but I don't think it's particularly "wild" that some people might think it's worthwhile to take it into consideration.
I mean, yeah, some things are awkward. But some people do rely on things.
And it's still possible to make the new behavior the default and add a switch to not have the metadata.
Workstations routinely accommodate much more than that. The "under $1K" price referred to a 768GB build (12x 64GB sticks on a Skylake-based system); you could also do a dual-socket version with twice that, at the cost of messing with NUMA (which could be a pro or a con for throughput depending on how you're spreading bandwidth between nodes).
CF sells domains at cost, so you're not going to beat them on price, but the catch is that domains registered through them are locked to their infrastructure: you're not allowed to change the nameservers. They're fine if you don't need that flexibility and they support the TLDs you want.
Had you paid more attention, you would have realised it's not the classic riddle but an already-tweaked version that makes it impossible to solve, which is why it is interesting.
Mellowobserver above offers three valid answers, unless your puzzle also clarified that he wants to get all the items/animals across to the other side alive/intact.
Indeed, but no LLM has ever realized that I don't actually specify that he can only take one thing at a time. It's natural that it would assume that (as would most humans), because it is so heavily primed to fill that in from every other version of the puzzle it's seen.
I'd give them full credit if they noticed that, but I also wanted to see if, given the unstated assumptions (one thing in the boat, don't let anything eat anything else, etc.), they'd realize it was unsolvable.
The unstated assumptions, common in all the other puzzles of the kind, are that the farmer wants to bring everything across, nothing is to be eaten by anything else, everything must be brought over by boat, and the farmer can only bring one thing in the boat at a time. Anything left with something that will eat another thing will eat it.
This variation of the classic puzzle is unsolvable. If you have a solution, let me know.
I've always felt MDD falls short very quickly, because for non-trivial projects you often want to show information related to more than one table, and so you have to fall back to writing the controller code manually anyway.
Yep, it can fall down if you force it everywhere. Read views are typically where custom "view models" are needed. For example on my current project with ~30 tables/models (https://humancrm.io), there are only a couple custom user views, and a couple custom write functions to simplify the UI flow. The rest of the API is generated from the data models. I take a pragmatic approach and don't force everything into MDD if there is a clear need.
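As a sketch of that split (hypothetical types, not from the project above): CRUD endpoints can be generated mechanically from the model structs, while the one screen that joins two tables gets a hand-written view model.

```go
package main

import "fmt"

// Company and Contact are hypothetical data models; CRUD endpoints for
// each could be generated mechanically from the struct definitions.
type Company struct {
	ID   int
	Name string
}

type Contact struct {
	ID        int
	Name      string
	CompanyID int
}

// ContactView is a hand-written read model for the one view that needs
// data from both tables; everything else stays generated.
type ContactView struct {
	ContactName string
	CompanyName string
}

// buildContactViews joins contacts to their companies in memory.
func buildContactViews(contacts []Contact, companies map[int]Company) []ContactView {
	views := make([]ContactView, 0, len(contacts))
	for _, c := range contacts {
		views = append(views, ContactView{
			ContactName: c.Name,
			CompanyName: companies[c.CompanyID].Name,
		})
	}
	return views
}

func main() {
	companies := map[int]Company{1: {ID: 1, Name: "Acme"}}
	contacts := []Contact{{ID: 1, Name: "Ada", CompanyID: 1}}
	fmt.Println(buildContactViews(contacts, companies))
}
```

The point is proportion: one small custom function like this per cross-table view, while the per-table plumbing stays generated.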
Modern planes do not, and many older planes have been retrofitted, in whole or in part, with more modern computers.
Some of the specific embedded systems (like the sensors that feed back into the main avionics systems) may still be using older CPUs if you squint, but it's more likely a modern version of those older designs.