This argument isn't _that_ compelling. Consider: send today's tech back a century and use that as your stand-in for the aliens in case 'c'. People in 1925 would 100% be able to see that tech. They wouldn't know what the hell they were looking at, or be able to do much about it, but they'd see it.
If we’re talking about aircraft, the combination of modern radar mitigation and modern sensor packages would allow a time traveling plane or drone to be effectively invisible in 1925.
Sure they’re not going to bend light around themselves, but they can fly outside of visual range and 1925 radar technology won’t stand a chance of detecting them.
Maybe this is a stupid question, but aren't they still going to be loud as fuck, and quite visible? How high do you need to be before you're not audible or visible? I guess go at night, sure, but...isn't all that crap more about being hard to precisely target than it is about being literally undetected?
Not a stupid question at all. I’m not sure about piloted aircraft, but drones currently operate at altitudes where they can’t be seen or heard from the ground.
It depends where you send it / why. There are lots of places you can send it where there's just nobody to see it. We still occasionally find uncontacted tribes out there, after all, so if someone didn't want to be seen (or even just didn't want to be seen in a place full of cameras), it would be trivial.
Sending the tech from 100 years in the future to today is not directly comparable to sending today's tech 100 years back.
By 2125, military aircraft will probably be silent, able to rapidly ascend to 100,000 feet (out of visual range), and maybe even invisible. So people today, faced with properly done future technology, wouldn't see it at all.
I _suspect_ they mean that certs imported into MMC in Windows can be accessed at magic paths, but...yeah, linux can't do that because it skips the step of making a magical holding area for certs.
There are magical holding areas in Linux as well, but that detail is left up to TLS libraries like openssl at run-time and hidden away from their clients. There are a myriad of ways to manage just CA certs: gnutls may not use openssl's paths, and each distro has its own idea of where the certs go. The ideal unix-y way (which windows/powershell actually gets right) would be to mount a virtual volume for certificates, where users and client apps alike can view/manipulate certificate information. If you've tried to get internal certs working across different Linux distros/deployments you might be familiar with the headache (a minor one, I'll admit).
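To make the headache concrete, here's roughly the dance every TLS client ends up doing on Linux. The path list below is illustrative and from memory (Go's crypto/x509 and Python's certifi carry similar lists), not exhaustive:

```python
import os

# Common CA bundle locations across distros (assumed from memory, not exhaustive):
CANDIDATE_BUNDLES = [
    "/etc/ssl/certs/ca-certificates.crt",   # Debian/Ubuntu
    "/etc/pki/tls/certs/ca-bundle.crt",     # Fedora/RHEL
    "/etc/ssl/ca-bundle.pem",               # openSUSE
    "/etc/ssl/cert.pem",                    # Alpine and others
]

def find_ca_bundle():
    """Return the first CA bundle path that exists on this system, or None."""
    for path in CANDIDATE_BUNDLES:
        if os.path.exists(path):
            return path
    return None

print(find_ca_bundle())
```

A single virtual mount point would replace this guessing game with one well-known path for every library and tool.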
Not for certs specifically (that I know of), but Plan 9 and its derivatives go hard on making everything VFS-abstracted. Of course /proc, /sys and friends are awesome, but there are still things that deserve their own FS view and are relegated to just 'files'. Like ~/.cache, ~/.config and all the xdg standards. I get it, it's a standardized path and all, but what's being abstracted here is not "data in a file" but "cache" and "configuration" (something more specific). It should still live at a VFS path, but what's exposed shouldn't be a file; it should be an abstraction of "configuration settings" or "cache entries" backed by whatever you want (e.g. redis, sqlite, s3, etc.). The windows registry (configuration manager is the real name btw) does a good job of abstracting configurations, but obviously you can't pick and choose the back-end implementation like you potentially could in Linux.
> The windows registry (configuration manager is the real name btw) does a good job of abstracting configurations, but obviously you can't pick and choose the back-end implementation like you potentially could in Linux.
In theory, this is what dbus is doing, but through APIs rather than arbitrary path-key-value triplets. You can run your secret manager of choice and as long as it responds to the DBUS API calls correctly, the calling application doesn't know who's managing the secrets for you. Same goes for sound, display config, and the Bluetooth API, although some are "branded" so they're not quite interchangeable as they might change on a whim.
Gnome's dconf system looks a lot like the Windows registry and thanks to the capability to add documentation directly to keys, it's also a lot easier to actually use if you're trying to configure a system.
The version I'm describing has it physically sitting in front of you at the time, so you can see that the colours haven't been changed "on the fly" after you pick an edge. In this version:
(A) I colour it;
(B) I cover the vertices so you can't see any of them, but I can no longer change them;
(C) You choose the edge, and I reveal the endpoints.
Converting this to a digital version requires further work ... my intent here was to explain the underlying idea that I can prove (to some degree of confidence) that I have a colouring without revealing anything about it.
So just off the top of my head, for example, I can, for each vertex, create a completely random string that starts with "R", "G", or "B" depending on the colour of the vertex. Then I hash each of those, and send you all of them. You choose an edge and send me back the two hashes for the endpoints, and I provide the associated random strings so you can check that the hashes match.
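A minimal sketch of that commit-reveal round in Python. The toy graph and colouring are made up for illustration, and a real protocol would re-randomize the colours (e.g. permute R/G/B) between rounds, since one round only catches a cheater with probability 1/|E|:

```python
import hashlib
import secrets

# Hypothetical valid 3-colouring of a small cycle graph (vertex -> colour).
colouring = {0: "R", 1: "G", 2: "B", 3: "G"}
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Prover: commit to each vertex colour by hashing "<colour><random nonce>".
nonces = {v: secrets.token_hex(16) for v in colouring}
commitments = {
    v: hashlib.sha256((colouring[v] + nonces[v]).encode()).hexdigest()
    for v in colouring
}
# Prover sends `commitments` to the verifier.

# Verifier: pick a random edge and ask for the two openings.
u, v = secrets.choice(edges)
opening_u = colouring[u] + nonces[u]
opening_v = colouring[v] + nonces[v]

# Verifier checks: openings hash back to the commitments, and colours differ.
assert hashlib.sha256(opening_u.encode()).hexdigest() == commitments[u]
assert hashlib.sha256(opening_v.encode()).hexdigest() == commitments[v]
assert opening_u[0] != opening_v[0]  # first char is the colour
```

The random nonce is what keeps the hashes from leaking the colouring: without it, you could just hash "R", "G" and "B" yourself and read every vertex off the commitments.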
This reminds me of the "Where's Waldo (Wally in UK)" example:
You can prove that you found Wally with a large piece of paper with a hole in it. You move the hole over Wally, and the person you're sitting with can see you found him, but they're no wiser about where he is on the page.
Another way is to have them put marks/signatures over the back of a blank sheet. Overlay the blank, cut Wally out of it where he occurs on the actual page, and give them the cutout.
Not really. You're talking about a fungus creating essentially a nuclear reactor inside of its cells, and creating it out of fuel that's not good enough to make a nuclear reactor in the first place (it at one time was, but now it's a mess of decay products and nonsense).
Reactors also take a certain amount of mass. You can't just squish two tiny microgram particles together and hope to get anything going.
Technically I guess I can't prove it wouldn't work if you make it dense/hot/covered-in-reflectors enough, but I'm pretty sure it's _well_ beyond the limits of what a fungus could even conceivably do.
Note that the only numbers on that page have various critical masses in kg. That's a bigass fungus.
And that's still not getting into: the "fuel" here is real shit. It's gotta be beyond its useful life even if you ignore that the thing melted down and corroded and blew up.
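For scale, a rough back-of-envelope in Python. The figures are textbook values I'm supplying (bare-sphere critical mass of U-235 around 52 kg, metallic uranium density around 19 g/cm³), not numbers from the thread:

```python
import math

# Rough textbook figures for uranium-235 (assumed, not from the linked page):
critical_mass_kg = 52        # bare-sphere critical mass
density_g_cm3 = 19.1         # density of metallic uranium

# Size of a solid sphere holding that much mass.
volume_cm3 = critical_mass_kg * 1000 / density_g_cm3
radius_cm = (3 * volume_cm3 / (4 * math.pi)) ** (1 / 3)

print(f"volume ~ {volume_cm3:.0f} cm^3, radius ~ {radius_cm:.1f} cm")
# A solid ball of pure fissile metal roughly 17 cm across; nothing a
# microscopic fungal cell could plausibly assemble from scattered decay junk.
```

Reflectors and compression can shrink that somewhat, but not by the dozen-plus orders of magnitude you'd need to fit it inside a cell.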
Most of what we live on, the vast majority, is iron or lighter. So it's more that we're sprinkled with supernova debris. But we are made out of stardust, so that's something.
An interesting fact: while almost all of the Solar System started as gas, which condensed into solid bodies that then aggregated into planets, a small part of its original matter consisted of solid dust particles that arrived as such, propelled here by the stellar explosions that created them.
So in meteorites, or on the surfaces of bodies unaffected by weathering, like the Moon or asteroids, we can identify small mineral grains that are true stardust, i.e. interstellar grains that have remained unchanged since long before the formation of the Earth and the Solar System.
We can identify such grains by their abnormal isotopic composition compared with Solar System matter. While many such interstellar grains should be just silicates, those are hard to extract from the rocks formed here, which are chemically similar.
Because of that, the best-known interstellar grains are those which come from stellar systems that are chemically unlike the Solar System. In most stellar systems there is more oxygen than carbon; those systems are like ours, with planets having iron cores covered by silicate mantles and crusts, topped in turn by a layer of ice.
In the other kind of stellar system there is more carbon than oxygen, and there the planets would form from minerals that are very rare on Earth, i.e. mainly silicon carbide and various metallic carbides, along with great amounts of graphite and diamond.
So most of the interstellar grains (i.e. true stardust) that have been identified and studied are grains of silicon carbide, graphite, diamond or titanium carbide, which are easy to extract from the silicates formed in the Solar System.
The notions of algorithms and computer science were invented to discuss the behavior of infinite sequences: examining whether there are finite descriptions of real numbers. This was extended by connecting complicated infinite structures, like the hyperreals, with decision theory problems.
That descriptions of other infinite sets also correspond to some kind of algorithm seems like a natural progression.
That it happens to be network theory algorithms rather than (eg) decision theory algorithms is worth noting — but hardly surprising. Particularly because the sets examined arose from graph problems, ie, a network.
I've done a bunch of theoretical PL work and I find this to be a very surprising result... historically the assumption has been that you need deeply "non-computational" classical axioms to work with the sorts of infinities described in the article. There was no fundamental reason to expect a nice computational description of measure theory just because certain much better-behaved infinities map naturally to programs. In fact, IIRC, measure theory was for a while one of the go-to examples of something that really needed classical set theory (specifically, the axiom of choice) and couldn't be handled nicely otherwise.
Much of your comment seems to be about your culture — eg, assuming things about axioms and weighting different heuristics. That we prioritize different heuristics and assumptions explains why I don’t find it surprising, but you do.
From my vantage, there are two strands that make such discoveries unsurprising:
- Curry-Howard generally seems to map “nice” to “nice”, at least in the cases I’ve dealt with;
- modern mathematics is all about finding such congruences between domains (eg, category theory) and we seem to find ways to embed theories all over; to the point where my personal hunch is that we’re vastly underestimating the “elephant problem”, in which having started exploring the elephant in different places, we struggle to see we’re exploring the same object.
Neither of those is a technical argument, but I hope it helps understand why I’d be coming to the question from a different perspective and hence different level of surprise.
The reason people had these assumptions is because people have been trying (unsuccessfully) to find a constructive interpretation of this stuff for a very long time. Even very fundamental results in measure theory like the Heine-Borel theorem typically require some extension to traditional constructive axioms. Like I absolutely get where you are coming from, but there are a large number of "nice" classical results that definitely do not have constructive counterparts. It's cool that descriptive set theory is not one of them but it's not obvious by any stretch of the imagination, and the pattern you're using to say that it's probably true ("Curry Howard maps nice to nice") is not great process IMO since it would fail in a lot of other cases.
Perhaps it's that a global solution in the language of set theory was hard to find, but distributed systems (which need to provide guarantees from local node behavior alone, without access to global state) offered an alternate perspective. They weren't designed to do so, but they ended up being useful.
They’re literally exploring the same object: properties of networks.
That you can express the constraints of network colorings (ie, the measure theory problem) as network algorithms strikes me as a “well duh” claim — at least if you take that Curry-Howard stuff seriously.
Curry-Howard is not some magic powder you can just sprinkle around to justify claims. The isomorphism provides a specific lens to move between mathematics and computation. It says roughly that types and logical propositions can be seen equivalently.
Nothing in the result in the article talks about types, and even if it could be, it’s not clear that the CH isomorphism would be a good lens to do so.
Curry-Howard literally says that a proof your object has a property is equivalent to an algorithm which constructs a corresponding type.
I’m not “sprinkling magic powder”, but using the very core of the correspondence:
A proof that your network has some property is an algorithm to construct an instance of appropriate type from your network.
In this case, we're using algorithms originally designed for protocols in CS to construct a witness of a property about a graph coloring. The article describes exactly this realization: during a lecture, he saw that the kinds of objects constructed by these algorithms correspond to the kinds of objects he works with.
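For concreteness, here is the textbook core of the correspondence, in Lean. This is a toy example of "proof term = program", nothing specific to the article's result:

```lean
-- Curry–Howard in miniature: the proof term *is* the program.
-- A proof of "A and B implies A" is literally the first projection:
example {A B : Prop} : A ∧ B → A := fun h => h.left

-- Dually, building a value of the conjunction type is proving the conjunction:
def pair_proof {A B : Prop} (a : A) (b : B) : A ∧ B := ⟨a, b⟩
```

The debate below is about whether the article's result is actually an instance of this, or only resembles it.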
Do you have any actual evidence that this result can be viewed as an instance of CH?
The networks on the measure theory side and on the algorithmic side are not the same. They are not even the same cardinality. One has uncountably many nodes, the other has countably many nodes.
The correspondence outlined is also extremely subtle. Measurable colorings are related to speed of consensus.
You make it sound like this is a result of the type: "To prove that a coloring exists, I prove that an algorithm that colors the network exists." Which it is not, as far as I understand.
It seems to me you are mischaracterizing CH here as well:
> A proof that your network has some property is an algorithm to construct an instance of appropriate type from your object.
A proof that a certain network has some property is an algorithm that constructs an instance of an appropriate type that expresses this fact from the axioms you're working from.
> You make it sound like this is a result of the type: "To prove that a coloring exists, I prove that an algorithm that colors the network exists." Which it is not, as far as I understand.
This is the crux of the proof, as I understand it: to classify network coloring measurability, I classify algorithms that color the network.
Which I can do because there’s a correspondence between network colorings (in graph theory) and algorithms that color networks (in CS). Which I’m arguing is an instance of CH: they’re equivalent things, so classifying either is classifying both.
> They are not even the same cardinality. One has uncountably many nodes, the other has countably many nodes.
[…] Measurable colorings are related to speed of consensus.
Yes — this is why the work is technically impressive, because proving the intuition from above works when extending to the infinite case is non-trivial. But that doesn’t change that fundamentally we’re porting an algorithm. I’m impressed by the approach to dealing with labels in the uncountable context which allows the technique to work for these objects — but that doesn’t mean I’m surprised such a thing could work. Or that work on network colorings (in CS) turned out to have techniques useful for network colorings (in math).
> It seems to me you are mischaracterizing CH
You then go on to make some quibble about my phrasing that, contrary to your claim about mischaracterizing, doesn’t conflict with what you wrote.
Edit: removing last paragraph; didn’t realize new author.
CH as I understand it has nothing to do with this. As an example that illustrates why, consider the simple infinite coloring discussed in the article that uses the axiom of choice. You could not write an algorithm that actually performs this coloring (because of the Axiom of Choice, and because it requires uncountably many actions). CH says that the statement "all such graphs can be colored" can be derived (in finitely many steps) by a program from the axioms, even though the coloring itself cannot be done by a computation.
What CH does not allow you to do is turn an existence proof (a coloring exists) into a constructive proof (a means to actually construct such a coloring). In fact, this is generally not true. Mathematical statements correspond to computations in a much more subtle and indirect way than that.
Honestly, I get the impression that you have a very superficial understanding of the topics at hand, but I am far from an expert myself. If you really know a way to see this as an instance of CH I would be very intrigued to learn about it.
> If you are smuggling large amounts of fentanyl or weapons into another country and they shoot you that seems pretty ok.
Assumes facts not in evidence.
Also, there's great reasons to have punishments for crimes that are not just summary executions. Even if you have a warped morality where all criminals of any sort should die, there's _still_ great reasons to not allow that to be chosen by the closest person with a gun. That way lies chaos and corruption.
> Even on Earth (the one place we know for sure can support life), life has only occurred once.
We don't actually know that at all. It could have happened many times and one line won out, it could have been more of a diffuse process than a single event (picture how microbes share genetic material ~freely but even less structured), or there could be a ton of life out there on Earth that's from a completely different tree. We really have very little idea what's living around us.
If there is a different tree of life right here on Earth and we don't know about it, that would cast doubt on our ability to detect life in worlds light years away. Also, if life had multiple false starts here on Earth, that does also suggest that it is very difficult to take hold even on the original Goldilocks planet. The idea that multiple versions of "life" co-developed and became a single strain is quite interesting to consider. I wonder what else needs to be true to support that theory.
> If there is a different tree of life right here on Earth and we don't know about it, that would cast doubt on our ability to detect life in worlds light years away.
Hm, I don't think it does. The problems are vastly different. Here on Earth the problem is: sift through all of life for some that's different from the rest. A _hard_ problem, given how little microscopic life we've completely cataloged and how much of Earth's volume we can't see.
The problem looking for life in the stars is more: find evidence of _any_ life, say radio signals or chemicals that can't reasonably come from anything but biology. Those are hard as hell, but fundamentally different problems.
> Also, if life had multiple false starts here on Earth, that does also suggest that it is very difficult to take hold even on the original Goldilocks planet.
That would be interesting. I kind of guess it's less likely than some kind of winner-take-all outcompeting thing, but who knows. Life that we see is just very good at spreading, escaping and holding on tight.
The economics of this makes sense if there's more than one shipping company (they'll compete to be cheaper and get more business) or if they just change how they charge; neither seems that hard.