Not necessarily. There is a concept of "light mining". AFAIK it's still largely a research topic rather than anything deployed, but it's certainly possible, and theory-wise it is secure up to more or less the same bounds as blockchain consensus in general.
The partial collision is easy to verify but hard to generate, and consensus is defined as "the longest chain is the source of truth". If some p2p node can present you a longer chain, you switch your source of truth to that one.
Generally, yes. But remember that there are difficulty adjustments, and it's conceivable that there are two chains, one being a bit shorter but with higher difficulty, and that can have precedence over the longer but easier one. The point is that you want the chain embodying most work, no matter how long.
(And note that a) the difficulty is included in the header that gets hashed, and b) it is easy to check that the block conforms to the specified difficulty.)
That's why "heavier-chain-rule" would be a better name than "longest-chain-rule", strictly speaking.
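A minimal sketch of that heavier-chain rule, under loud assumptions: the field names and the work approximation here are hypothetical (real clients derive per-block work from the 256-bit target, not from a count of leading zero digits), but the shape of the comparison is the same.

```python
import hashlib

# Illustrative sketch only; all names here are hypothetical.

def block_hash(header: bytes) -> str:
    return hashlib.sha256(header).hexdigest()

def meets_difficulty(header: bytes, difficulty: int) -> bool:
    # The "partial collision": hash must start with `difficulty` zero hex digits.
    # Trivial to check, expensive to produce.
    return block_hash(header).startswith("0" * difficulty)

def chain_work(chain: list[dict]) -> int:
    # Approximate each block's work as the expected number of hash attempts
    # (16**difficulty for hex digits); real clients sum 2**256 / (target + 1).
    return sum(16 ** b["difficulty"] for b in chain)

def pick_canonical(chain_a: list[dict], chain_b: list[dict]) -> list[dict]:
    # Heavier-chain rule: most accumulated work wins, not most blocks.
    return chain_a if chain_work(chain_a) >= chain_work(chain_b) else chain_b
```

With this, a short chain of difficulty-3 blocks (2 × 16³ = 8192 expected hashes) beats a longer chain of difficulty-1 blocks (5 × 16 = 80), which is exactly the case where "longest chain" is a misleading name.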
It depends on the implementation. The naive solution is to have every client hold the full chain.
The lightweight solutions come in two flavors, the easy "good enough" solution and the much harder ideal/zero trust solution.
The easy solution (light clients) to avoiding carrying the full chain is to simply rely on some set of known/trusted "beacon" servers that you are willing to trust to relay you the chain state and send you what information you need.
The hard solution is a "super light" client. One of the famous super-light-client designs is flyclient [1]. It relies on some tricks with proof of work to store only log2(n) of the n blocks in the whole chain. That gives you enough security to verify that your chain is valid and built from the genesis block, and it lets you apply the longest-chain rule to decide which chain is the current "official" chain for the network, just as you would with the full chain history.
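For context on what any header-only client (light or super light) has to check, here is a toy sketch: each header commits to its parent's hash and must satisfy its stated difficulty. The field names and encoding are hypothetical, not any real wire format, and this omits the log2(n) sampling tricks that make flyclient special.

```python
import hashlib

# Hypothetical header format: prev-hash, difficulty, nonce.

def header_hash(h: dict) -> str:
    # Difficulty is part of what gets hashed, so a block can't lie about
    # the difficulty it was mined at without changing its own hash.
    data = f'{h["prev"]}|{h["difficulty"]}|{h["nonce"]}'.encode()
    return hashlib.sha256(data).hexdigest()

def verify_headers(headers: list[dict]) -> bool:
    for i, h in enumerate(headers):
        if not header_hash(h).startswith("0" * h["difficulty"]):
            return False                      # proof of work doesn't check out
        if i > 0 and h["prev"] != header_hash(headers[i - 1]):
            return False                      # chain linkage is broken
    return True
```

Note that this validates structure and work, not transactions; that's why even a chain of nothing but headers is enough to apply the chain-selection rule.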
There's another approach called NiPoPoWs [2] (non-interactive proofs of proof-of-work), which is conceptually similar but a bit more generally useful (beyond just light-client systems). A few networks adopted it, but I don't know how prevalent its use is nowadays.
Note that flyclient, NiPoPoWs, and most super-light clients tend to rely on properties of proof of work as well as UTxO accounting models, which disqualifies their use on most networks. Cardano, at the very least, seems to have figured out its own version [3][4]; it exists as a kind of conceptual redesign of NiPoPoWs for stake-based systems (and actually came out of NiPoPoW research).
And of course super-light clients still generally require miners to hold the full chain state, but there's work [5][6] on "light mining", which would allow everybody to abandon old chain state and keep only the data they care about.
Note: a lot of the research I linked is interrelated, as these are the researchers I kept up with most closely last time I was deep in the ecosystem, but there's a lot of work on the topic in general, coming at these problems from different angles.
Yeah, at its core a blockchain-based cryptocurrency is a consensus system and a decentralised resource market, where the resource in question is space in the blocks within some time bound, plus verifiable proof of the time and state they were accepted in.
That core feature of "providing a total ordering for state changes and events with formal trust bounds" turns out to have a lot of potential uses.
Now of course truly providing correct timestamps or really any clock mechanism in a trustless way turns out to be massively difficult. And not just in a blockchain but really in any decentralised/distributed system. It's a famously unsolved problem.
There's some research[1] on how to go about providing a "global time"/"global clock" for cryptocurrencies without external trust assumptions but it's extraordinarily academic and most if not all systems just assume trusted time within some bound and hope for the best.
In a sense, a PoW blockchain such as Bitcoin can convey a global time/global clock if all participants understand that the average block interval is 10 "minutes": sometimes longer, sometimes shorter, but converging to 10 minutes in aggregate.
Over great distances this breaks down given limits on the speed of transmission (the speed of light). However, if transmission were instantaneous (quantum entanglement?), that would solve the dilemma of what "now" means light-years away, given our relativistic idea of time between here and there.
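Setting relativity aside, the 10-minute heuristic above amounts to a coarse clock you can read off block heights. A back-of-the-envelope sketch, assuming Bitcoin's target interval (real elapsed time drifts around this, and difficulty retargeting only corrects it in aggregate):

```python
# Coarse "block clock": elapsed time estimated purely from block count,
# assuming the 10-minute target interval discussed above.

TARGET_BLOCK_MINUTES = 10

def approx_minutes_elapsed(height_then: int, height_now: int) -> int:
    return (height_now - height_then) * TARGET_BLOCK_MINUTES
```

So 144 blocks approximates one day (144 × 10 = 1440 minutes), which is roughly how protocol-level timeouts are often expressed in block counts rather than wall-clock time.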
Oh yeah. Sorry I misspoke a bit. I should have said that global time/clocks are an unsolved problem in non-proof-of-work systems.
Proof of work does a decent job approximating a monotonic clock, but that only works when you are expending obscene amounts of energy on a global scale. And like you said, it breaks down over longer distances (though luckily we don't have to deal with that much for now).
But in any non-PoW system, a "trustless" global clock is extremely non-trivial.
That's because PoW solves the Byzantine Generals problem, as I understand it. Before PoW, that problem was intractable (extremely non-trivial). It's always lamented that so much energy is needed to solve the problem, although that seems to be the nature of the problem. Maybe time and energy are inextricably linked.
There's certainly application outside of currencies. Bluesky/atproto for example is built on DIDs (decentralised IDs) and IPLD (the data format/standard of IPFS). Both are very heavily rooted in cryptocurrency tech.
There's a joke in the atproto community that it's a blockchain but without the currency because of this.
IPLD is literally a merkle-tree data-structure format standardised by IPFS, which is heavily rooted in cryptocurrency and in fact has its own cryptocurrency created by the IPFS devs: Filecoin.
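A toy illustration of the content addressing that makes IPLD a merkle structure: a node's ID is the hash of its encoded content, and links between nodes are those hashes, so a parent transitively commits to everything beneath it. The encoding here is plain JSON and the IDs are bare SHA-256 hex digests, not the real IPLD codecs or CID format.

```python
import hashlib
import json

# Hypothetical stand-in for a CID: hash of the canonically-encoded content.
def cid(obj) -> str:
    data = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()

leaf_a = {"data": "hello"}
leaf_b = {"data": "world"}

# The parent links to children by hash, so its own ID commits to their
# content: change any leaf and every ancestor's ID changes too.
root = {"links": [cid(leaf_a), cid(leaf_b)]}
```

That tamper-evidence property is exactly what both blockchains and IPFS-style content-addressed storage get out of the same merkle-tree idea.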
DIDs were created by cryptocurrency orgs. The standard was created by a bunch of cryptocurrency groups working with the W3C and the entire time it was being developed, it was derided by non-cryptocurrency people as just another way for cryptocurrency to scam people. It doesn't stop being related to cryptocurrency once you realise it's useful.
Another example of power resides where men believe it resides.
Americans are just very scared of Mossad. Tons of money goes into Hollywood to make them appear invincible to the world. Fun fact: they aren't.
Intelligence agencies have great capabilities, no doubt; they get billions of dollars and utter immunity to do whatever they want in the name of national security. But why is only Mossad scary? I'd be more scared of the CIA and KGB than of Mossad.
The US has never been under existential threat the way Israel has; if it were, I wouldn't want to stand in their way.
> Americans are just very scared of Mossad. Tons of money goes into Hollywood to make them appear invincible to the world.
I don't believe I've ever seen Mossad depicted in a Hollywood movie? I guess there was Munich. Are there specific movies/TV shows that you're thinking of?
Americans, by and large, don't even think about Mossad. Certainly not the way they're aware of the CIA and KGB (the latter of which no one should be scared of at the moment, since it hasn't existed since 1991, though obviously there are modern successors).
Most of the engineers who work in my department would have had an easier time explaining big O when they had a lot less experience. Most of them haven't thought about it since college.
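For anyone who, like those engineers, hasn't thought about it since college, a minimal refresher on what big O describes (the example and names are mine, not from the comment above): membership in a list is O(n) because the worst case scans every element, while a set uses hashing for O(1) average-case lookups.

```python
def in_list(items: list, x) -> bool:
    # Linear scan: worst case touches all n elements, so O(n).
    for item in items:
        if item == x:
            return True
    return False

def in_set(items: set, x) -> bool:
    # Hash-table lookup: O(1) on average, regardless of n.
    return x in items
```

Both return the same answers; big O is about how the cost grows as the input does, which is exactly the part that's easy to forget when you stop measuring it.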
The other scam I get a lot is people trying to get me to do paid work for nothing, then acting offended when I don't immediately start before there's even a contract in place. There are so many idea bros now who just whack together some crap with AI. And it works fine for them up until it breaks; then they think they can just find a developer to "do the finishing touches," not realizing that sifting through an avalanche of AI spaghetti to get it working is not an easy task (and frankly not even worth doing, even for money). They can dig their own graves.
I see where you’re coming from, but there’s a small difference. Coding itself is mostly routine work: turning ideas into working code. Humans really stand out in the important parts: creative thinking, planning and architecting the system, deciding what it should do and how it should do it, finding problems, checking code quality, and making smart decisions that a tool can’t. AI can help with the routine work, but the creative and thinking parts are still human. And this is exactly where developers should focus and evolve.
> creative thinking, planning and architecting the system, deciding what it should do and how it should do it, finding problems, checking code quality, and making smart decisions that a tool can’t.
Are you aware that there are people that think that even now AI can do everything you describe?
The reason crappy software has existed since...ever is because people are notoriously bad at thinking, planning and architecting systems.
When someone makes a "smart decision", it often translates into a nightmare for someone else 5 or 10 years down the line. Most people shouldn't be making "smart decisions"; they should be making boring decisions, as most software is actually glorified CRUD. There are exceptions, obviously, but don't think you're special: your code also sucks and your design is crap :) The goal is often to be less sucky and less crappy than one would expect; in the end, it's all ones and zeros, and the fancy abstractions exist to dumb down the ones and zeros to concepts humans can grasp.
A machine can and will, obviously, produce better results and better reasoning than an average solution designer; it can consider a multitude of options a single person seldom can; it can point out from the get-go shortcomings and domain-specific pitfalls a human wouldn't even think of in most cases.
So go ahead, try it. Feed it your design and ask about shortcomings; ask about risk management strategies; ask about refactoring and maintenance strategies; you'd probably be surprised.
I completely understand what you mean, as a creator of boring and stable solutions deployed in production, in some cases still there untouched for nearly two decades. But no, I don't agree with the "it can consider a multitude of options a single person seldom can" part, since that's not really what is happening right now; it doesn't work that way.
> So go ahead, try it. Feed it your design and ask about shortcomings; ask about risk management strategies; ask about refactoring and maintenance strategies; you'd probably be surprised.
Answers to these and other kinds of questions are, in my opinion, just a watered-down version of actual thinking right now. Interesting, but still too simple and not that actionable. What I mainly use LLMs for is exploring the space of solutions, which I then investigate if there is something promising (mainly deep research of topics, or possible half-broken/random solutions to problems). Most of the time I'm not really interested in an actual answer so much as avenues for investigation that I didn't consider. Anyway, I'm not saying that AI is useless right now.
People often blame LLMs for bad code, but the real issue is usually poor input or unclear context. An LLM can produce weak code if you give it weak instructions, but it can also write production-ready code if you guide it well, explain the approach clearly, and mention what security measures are needed. The same rule applies to developers too. I’m really surprised to see so much resistance from the developer community; instead, they should use AI to boost their productivity and efficiency. Personally I am dead against using CLI tools; IDE-based tools give you better visibility into the code produced and better control over the changes.
I don't want to sound cynical, but a lot of it has to do with the simplicity of the language. It's much harder to find a good Rust engineer than a C one. When all you have is pointers and structs, it's much easier to meet the requirements for the role.
Does every device hold the chain of blocks?