Viper: a new programming language from Ethereum (github.com/ethereum)
192 points by RabbitmqGuy on May 22, 2017 | 123 comments


As a bonus, it wraps all transactions in the VIP monad, where if you are an important person in Ethereum, you can get your poorly written contracts reversed.

Sorry, since the DAO fiasco I don't see Ethereum having any purpose. If a poorly written/unfair contract should be invalidated, I would rather have the judicial system, with hundreds of years of jurisprudence, make that decision than a few people at Ethereum.


The DAO contract was not invalidated. It is still there, on the blocks where it was deployed.

The fact that the people running the nodes stopped seeing that fork of the chain as the interesting one is distributed consensus at work. Anyone who wants to keep observing that original fork can do so; that fork is actually alive in the form of Ethereum Classic.

Ethereum was forked after the DAO fail because people saw more value in a forked chain with new rules. Speaking about contract reversals or invalidations is misleading. That is simply not possible, because Blockchain. And while anyone can decide to fork a blockchain, building consensus around the fork is what gives it its value.

The DAO was a marketing horror because it promised a world ruled by code, built on top of blockchain tech. Blockchains, though, don't run on code but on consensus.


Again, though, if I want the interpretation of my contracts to boil down to what some group of people weighted by some measure of power thinks, I've got more faith in a conventional country that allocates power to elected officials etc. than in a loose confederation of anonymous drug barons weighted by whoever bought new hardware most recently.


What value does the Ethereum protocol provide when a disagreement splits the network in two, without any recourse for either of the parties involved?

Which hard social/computer science problems are solved through a "consensus" algorithm that essentially constitutes denying the existence of those who disagree?

With Bitcoin, determining whether a fork is successful is very simple: are the tokens on its blockchain as liquid as those on the Bitcoin one? If they are, they are just as useful for transferring value as the "original" Bitcoin.

What metric would we use to determine whether an Ethereum fork is successful or not? If nothing is lost when a consensus algorithm fails to establish consensus, what value did it provide in the first place?

Market cap is an arbitrary measure because, in and of itself, it solves nothing. At any time, I can fork Ethereum with a buddy of mine, trade the resulting tokens at whichever price the original Ethers trade at, and the market cap would be the same. Or, alternatively, trade them at 1/1000 the Ether price, but change the protocol such that 1000 times as many exist.


>What value does the Ethereum protocol provide when a disagreement splits the network in two, without any recourse for either of the parties involved?

Don't give splits such importance. The nature of blockchains means that the network is being constantly forked into many chains at its tip. It doesn't matter which protocol is running on that blockchain.

>Which hard social/computer science problems are solved through a "consensus" algorithm that essentially constitutes denying the existence of those who disagree?

Existence is not being denied. The chain is still there. The problem is that the rest of the people, a majority of nodes, prefer another version of the chain.

>With Bitcoin, determining whether a fork is successful is very simple: are the tokens on its blockchain as liquid as those on the Bitcoin one? If they are, they are just as useful for transferring value as the "original" Bitcoin.

>

>What metric would we use to determine whether an Ethereum fork is successful or not? If nothing is lost when a consensus algorithm fails to establish consensus, what value did it provide in the first place?

Proof of Work consensus algorithms don't "fail" to establish consensus. They are designed to swiftly choose one of many possible chains, based on a few simple parameters (the most important one being the amount of work needed to create the chain). That said, value is an off-chain concept. Just as US dollars have value because we all know they are widely accepted, Ethereum has value because there is a team of developers plus a big community plus a lot of nodes, and all that together allows some use cases that make Ethereum have value. Ethereum Classic has less value because it is not as widely accepted. Bitcoin has a lot of value because as of now it is the most popular cryptocurrency. Nothing about this has to do with the tech itself, but with the way we see the tech.


> that fork of the chain [...] is distributed consensus at work

If you have two consensuses, you don't have consensus.

(Edit: Yes, I know, my quote is misleading, but I claim that this distorted version does somewhat represent the gist of the author's words.)


> Sorry, since the DAO fiasco I don't see Ethereum having any purpose.

From the point of view of speculators in Ethers, the purpose of Ethereum's continued existence is to avoid losing money. Exactly the same motivation that caused them to redefine the core protocol, in order to accommodate those who invested in a badly written contract.

Also, to be fair, Ethereum Classic still exists. Just because some couldn't accept reality doesn't mean the benefits of a Turing-complete blockchain are gone.


You'll note that the language in question is designed with an emphasis on guaranteeing program correctness in certain respects.

You should also be aware that Ethereum Classic did not implement the DAO hack fork and continues to be used, and that this language should work just as well there.

Not to say that this language is known to not have important bugs (idk), but, y'know, it is definitely work towards contract safety.


Whenever I read this argument being made, I cannot help but think "didn't you read the fine print?" :)


Decidability. :) I'd love to see that show up in more languages. I believe it allows for better abstractions without optimization deficiencies because it's all decidable.


The big problem with that is that if a language is decidable it can't be Turing complete. That means that a large number of useful programs are just not going to be expressible. That said, for something like this it's a perfect fit to have it be decidable.


There are no useful programs that require Turing completeness. That's basically by definition. Turing completeness is what allows programs to have non-productive infinite loops. Any system that guarantees productivity isn't Turing complete. All useful algorithms are productive. The only thing you can do with a non-productive algorithm is convert electricity to heat.


I took a course in college where we used Coq to build up an imperative programming language, and I was surprised how much you could do (i.e. essentially anything you needed to) with a non-Turing complete language. I think that the traditional computer science curriculum makes such a big deal about Turing machines (and not necessarily wrongly so) that students just assume that Turing completeness is a natural goal to strive for rather than just a property with tradeoffs, just like any other property. It certainly doesn't help that it seems harder to create a language that's not Turing complete than one that is, at least if you aren't explicitly trying to avoid it; just look at all the random things that have been discovered to be accidentally Turing complete over the years.


> Any system that guarantees productivity isn't Turing complete. All useful algorithms are productive.

But it does not follow that all useful algorithms can be implemented in a language that "guarantees productivity". Turing proved that the halting problem cannot be solved on a Turing machine. This opens the possibility for the existence of programs that might be "productive" in your sense, but might also never stop, and it is not possible to know in advance.

> The only thing you can do with a non-productive algorithm is convert electricity to heat.

I am not convinced of this at all. Perhaps true if you are talking about banking systems or web applications, but probably not true if you are talking about AI. Intermediary states of an endless computation might be interesting. Maybe this is a way to obtain unbounded creativity. Maybe this is the way to build minds. We don't know enough.

A more general observation: I find that we live in an era that is too obsessed with productivity at the cost of fundamental research, dreaming and imagination. I am convinced that the latter mindset is the only one that can bring qualitative changes to our culture and civilisation, and I think that our long-term survival depends on such qualitative jumps.

Of course, I also understand that someone has to take care of the plumbing...


> Turing proved that the halting problem cannot be solved on a Turing machine. This opens the possibility for the existence of programs that might be "productive" in your sense, but might also never stop, and it is not possible to know in advance.

Maybe. I struggle to imagine a practical case where we want to run a program that we didn't and couldn't know whether it worked though.

> I am not convinced of this at all. Perhaps true if you are talking about banking systems or web applications, but probably not true if you are talking about AI. Intermediary states of an endless computation might be interesting. Maybe this is a way to obtain unbounded creativity. Maybe this is the way to build minds. We don't know enough.

This is ridiculous reasoning. "We don't understand X, we don't understand Y, therefore X might be related to Y."

> A more general observation: I find that we live in an era that is too obsessed with productivity at the cost of fundamental research, dreaming and imagination. I am convinced that the latter mindset is the only one that can bring qualitative changes to our culture and civilisation, and I think that our long-term survival depends on such qualitative jumps.

> Of course, I also understand that someone has to take care of the plumbing...

Choosing to look at non-halting programs rather than halting programs is like choosing to look at crystal energy instead of nuclear fusion. A certain amount of willingness to question baseline assumptions is valuable, but I think our long-term survival depends far more on being willing to acknowledge the fundamental results of the field and put in the hard engineering work necessary to achieve things under the constraints of reality, rather than trying to wave them away.


> Maybe. I struggle to imagine a practical case where we want to run a program that we didn't and couldn't know whether it worked though.

The vast majority of the programs used in real life are not formally proven and are written in Turing complete languages. That is already the world you live in.

> This is ridiculous reasoning. "We don't understand X, we don't understand Y, therefore X might be related to Y."

I didn't say that.

Notice that a (very simplified) model of the human brain, the recurrent neural network, is already Turing complete. Notice also that humans (and Darwinian evolution, for that matter) display a capacity for creativity that has not been successfully replicated by AI efforts yet. Notice further that non-halting computations (e.g. infinitely zooming a Mandelbrot set) are the closest thing we have to unbounded creativity.

Maybe I'm wrong, of course.


> The vast majority of the programs used in real life are not formally proven and are written in Turing complete languages. That is already the world you live in.

They're unproven, not believed to be unprovable. (Indeed, almost all practical programs make at least some effort to offer evidence and informal arguments for their correctness, via tests, comments on unsafe constructs, and so on).

> Notice also that humans (and Darwinian evolution, for that matter) display a capacity for creativity that has not been successfully replicated by AI efforts yet. Notice further that non-halting computations (e.g. infinitely zooming a Mandelbrot set) are the closest thing we have to unbounded creativity.

This is meaningless woo.


> They're unproven, not believed to be unprovable.

A decidable language (let's say Ld) is less powerful than a Turing-complete language (Lt). This means that there are some computations that can be expressed in Lt but not in Ld, and that there are some problems that can be solved in Lt but not in Ld. These are theorems of theoretical computer science. I don't believe anyone has a clear idea of the practical implications of this limitation, and how many of the algorithms in current use are only possible in Lt.

My guess is that the complexity of the software in use nowadays vastly exceeds our ability to do such an analysis, but maybe you know something I don't. Otherwise, while it is true that they are not believed to be unprovable, it is also true that they are not believed to be provable.

> This is meaningless woo.

I'm not sure I should reply to this, because it is just name calling, but for other people reading this (and you, if you're still interested): people who study artificial creativity and related fields such as Artificial Life take these ideas seriously and have interesting philosophical definitions and mathematical formalisms to address them. I have been to conferences sponsored by serious universities and other organisations such as ACM and IEEE where ideas such as the generative power of the Mandelbrot set are seriously discussed. There are several attempts to quantify creativity and to connect the idea of creativity with computer science.

It is important to not have a mind so open that the brain falls out, but I suggest that you may be going too far in the opposite direction.


> I don't believe anyone has a clear idea of the practical implications of this limitation, and how many of the algorithms in current use are only possible in Lt.

> My guess is that the complexity of the software in use nowadays vastly exceeds our ability to do such an analysis, but maybe you know something I don't. Otherwise, while it is true that they are not believed to be unprovable, it is also true that they are not believed to be provable.

Mathematicians and computer scientists deliberately seek out problems that are not in Ld, and have only found constructed examples, mostly minor variations on the same "diagonalization" argument. Any algorithm that is known to work is necessarily in Ld, and those constitute the overwhelming majority of algorithms that are published or used, for obvious reasons. (And those that are merely believed to work are, in the overwhelming majority of cases, believed to work for reasons that translate directly into a belief that they could be proven to work).


> have only found constructed examples, mostly minor variations on the same "diagonalization" argument

Anything that is Turing-complete cannot be implemented in Ld (by definition). Off the top of my head, this includes: Recurrent Neural Networks, CSS, Minecraft, TrueType fonts, x86 emulators (MOV is Turing-complete) and Conway's Game of Life.

Of course you can argue that lots of things can be implemented in an Ld language. Sure, I have nothing against it, but it's not like desiring Turing-completeness is an absurd requirement.

> Any algorithm that is known to work is necessarily in Ld

No, most algorithms are known to work correctly for the common cases that are tested for + all the edge cases the developers can think of or encounter in real life. For non-trivial software, this is a minuscule subset of the possible states. Lots of things are surprisingly Turing-complete, and it is not trivial to prevent this for a sufficiently complex system.


> Anything that is Turing-complete cannot be implemented in Ld (by definition). Off the top of my head, this includes: Recurrent Neural Networks, CSS, Minecraft, TrueType fonts, x86 emulators (MOV is Turing-complete) and Conway's Game of Life.

You're begging the question - your Turing-complete algorithm is "evaluate an expression in a Turing-complete language". It's easy to make a language accidentally Turing-complete (especially when you're thinking in a Turing-complete language), but that completeness is undesirable, and in realistic use cases it's harmful rather than helpful. No-one wants to sit waiting indefinitely to see whether a web page or font is actually going to render or not (and indeed we often end up going to great lengths to make these things Turing-incomplete in practice with timeouts and the like).

> No, most algorithms are known to work correctly for the common cases that are tested for + all the edge cases the developers can think of or encounter in real life. For non-trivial software, this is a minuscule subset of the possible states.

You're not contradicting me, you're saying "most algorithms are not known to work". I don't think that's true in the sense of "algorithms" published in journals/textbooks. I would agree that most code isn't known to work, but that translates into reality: most code doesn't work, most programs have cases where they just break and also just crash every so often.

> Lots of things are surprisingly Turing-complete, and it is not trivial to prevent this for a sufficiently complex system.

For a complex system written from scratch, it's easy enough with the right tools. If you build it in a non-Turing-complete language it won't be Turing-complete, and if something is hard to do in a total way it's probably a bad idea.

Porting an existing system would be much harder, I'll agree, and porting the existing code/protocol ecosystem would be a huge ask. (I do think it's necessary though; the impact of malware attacks gets worse every day, the level of bugginess we're used to seeing in software is rapidly ceasing to be good enough).


This entire discussion about Turing completeness is completely irrelevant. The problem that's relevant to software verification isn't the halting problem in its original formulation, but a simple corollary of it, often called "bounded halting" (which serves as the core of the time hierarchy theorem, possibly the most important theorem in computer science), which states that you cannot know whether a program halts within n steps in under n steps. The implication is that there can exist no general algorithm for checking a universal property (e.g. the program never crashes on any input) that is more efficient than running the program on every possible input until termination.

But bounded halting holds not only for Turing complete languages, but for total languages, too (using the same proof). In fact, it holds even for finite state machines (with a different proof). This is why even the verification of finite state machines is generally infeasible (or very expensive under restricted conditions) both in theory and in practice.
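
To make the baseline concrete, here is a toy Python sketch (my own illustration) of the brute-force approach referred to above: checking a universal property like "never crashes" by literally running the program on every input in a finite domain. Even a 32-bit input space already means 2^32 runs, which is the point.

    def never_crashes(program, inputs):
        """Return True iff `program` raises no exception on any input in `inputs`."""
        for x in inputs:
            try:
                program(x)
            except Exception:
                return False
        return True

    # Example with a deliberately tiny domain:
    def divide_hundred(x):
        return 100 // x                                  # crashes on x == 0

    print(never_crashes(divide_hundred, range(1, 256)))  # True
    print(never_crashes(divide_hundred, range(0, 256)))  # False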


To my mind all that says is that even general total languages or finite state machines allow expressing too much. Surely one can solve this by working in a language where functions are not merely total but come with explicit bounds on their runtime. This seems like a good fit for a stratified language design like Noether - we might have to resort to including a few not-provably-bounded total functions (or even not-provably-total functions) in a practical program, but hopefully would be able to minimize this and make the parts where we were making unproven assumptions very explicit.


There are languages where all functions run in PTIME (https://www.cs.toronto.edu/~sacook/homepage/ptime.pdf), but this doesn't help.

If your program is a FSM, then your functions run in constant time, and still verification is infeasible. Search for "Just as a quick example of why verifying even FSMs is hard" in my blog post http://blog.paralleluniverse.co/2016/07/23/correctness-and-c...

Languages that are so limited as to be unusable are still PSPACE-complete to verify.

It is simply impossible to create a useful programming language where every program is easily verifiable. Propositional logic -- the simplest possible (and too constrained to be useful) "language" -- is already intractable. Feasible verification in the worst case and computation of even the most limited kind are simply incompatible. As anyone who has done any sort of formal verification knows -- it's hard. There are two general ways of getting around this difficulty. Either we verify only crude/local properties using type checking/static analysis (both are basically the same abstract interpretation algorithm), or tailor a verification technique to a small subset of programs. The other option is, of course, to work hard.


You're assuming that we can only ever add to languages, not constrain them. Yes if your program has 6 decision points and you model each possible combination explicitly then your model will have 2^6 possible states - but you don't have to model it that way. In previous discussions you've said that humans understand these programs by using symmetries to drastically reduce the size of the state space - let the tool do the same thing.


Sure, and model checkers and static analyzers do use symmetries and much more sophisticated ideas (abstract interpretation), but the point remains that it is provably impossible to create a language where every program is feasibly verifiable. My example wasn't even the most restrictive (even though it was too restrictive to be generally useful): it didn't have loops or recursion or higher-order functions. But even a language that doesn't have functions or branching at all, and has only boolean variables is already NP-complete to verify. Computation -- even of the most restrictive kind -- and verification are essentially at odds. In a way, that is the defining feature of computation: a mathematical object constructed of very simple parts, whose composition quickly creates intractability.

On the other hand, specific programs, even in Turing-complete languages, can be feasibly verifiable. We can make our tools more sophisticated, but we can't make our languages restrictive enough that verification would always be possible in practice. There may be things languages can do to help, but changing the expressiveness of the computational model is simply not one of them.


I'm already used to working without unbounded loops and moving away from unrestricted recursion; I can appreciate that unrestricted boolean expressions are complex to verify but I'm just not convinced that day-to-day software development (whether we call it computation or something else) actually needs such things. To my mind verification goes hand-in-hand with implementation, provided the language gives you the tools to do that - maybe I'm arguing for a metalanguage rather than a language, but if I want to produce a program with particular properties that seems easy to achieve by constraining myself to a sublanguage where these properties are true by construction, and then writing my program.


> > the existence of programs that might be "productive" in your sense, but might also never stop, and it is not possible to know in advance

> I struggle to imagine a practical case where we want to run a program that we didn't and couldn't know whether it worked though.

First-order theorem proving is a very practical case. Proof in first-order logic is "recursively enumerable", which means that we can write provers that, given a formula F, produce a proof of F in finite time if such a proof exists. But in general we don't know if a proof exists or how long it will take to find it. So if we start a prover, it will just sit there and not look "productive" until it either returns a proof or we kill it because we're tired of waiting.


But we wouldn't want to just leave such a theorem prover running indefinitely. Far more practical to have it check all possible proofs of length up to n (trivial to do in a total language), or at most run it for a particular number of days, months or years. At some point you do end the experiment.


Sure. You usually run the prover with a timeout. The point is, this is a practical example of running a program where you cannot predict whether it will "work" or not. Additionally, when you reach the timeout, you will not have gained any information, i.e., the process is not "productive" in the sense discussed above.

Edit: Can't reply to your reply. Interesting. Yes, true, "there is no proof of length < N" is some information, but it's not information that tells you anything about the provability of your formula. It's not productive information. You can keep trying, but you won't be able to overturn the semidecidability of first-order logic in a Hacker News comment thread.


If the prover simply went into an infinite loop while checking the first proof candidate and never got as far as checking the second one, would it be working correctly? I would consider such a prover not useful, and a useful prover would be one that told me something when reaching the timeout, such as that there were no proofs for the given theorem shorter than a certain length.


This is handled in university CS algorithms courses. In such a case, dovetailing or interleaving the execution of the TM on the input would be used. Sipser pp. 150–152 https://courses.engr.illinois.edu/cs373/sp2009/lectures/old/...
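
For anyone who hasn't seen it, dovetailing is easy to sketch with Python generators (a toy illustration of the idea, not the Sipser construction verbatim): step each candidate computation in a round-robin, so that a computation which halts with a result is eventually noticed even if its neighbours never halt.

    def dovetail(computations):
        """Round-robin over generators; yield (index, value) for each finished one."""
        active = list(enumerate(computations))
        while active:
            still_running = []
            for i, gen in active:
                try:
                    value = next(gen)              # run one "step" of computation i
                except StopIteration:
                    continue                       # computation i halted with no result
                if value is not None:              # convention: None means "still working"
                    yield i, value
                else:
                    still_running.append((i, gen))
            active = still_running

    def halts_eventually():
        for _ in range(3):
            yield None                             # still working
        yield 42                                   # the result

    def never_halts():
        while True:
            yield None

    for index, result in dovetail([halts_eventually(), never_halts()]):
        print(index, result)                       # prints "0 42" despite the non-halting neighbour
        break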


So I might have made this a bit confusing by conflating two things in my original post. The real point is that there are a lot of undecidable programs that don't require Turing completeness, and a lot of those useful programs are undecidable. Here's an example of a really simple undecidable program:

    10 PRINT "Enter your name"
    20 INPUT NAME$
    30 PRINT "Hello ", NAME$
    40 IF NAME$ <> "Ralph" THEN
    41 GOTO 10
    42 END IF
    50 PRINT "Goodbye Ralph"
That program, because it might loop indefinitely, isn't decidable and isn't possible to describe in Viper by design. This kind of program is pretty analogous to basically every program that communicates with the network, user, or some other external system.


"Undecidability" doesn't apply to interactive systems. Inputs must be finite: https://en.wikipedia.org/wiki/Undecidable_problem In your example, the input (an infinite stream of strings not equal to "Ralph") is not finite.

Instead, think of an interactive system as a finite state machine. The question to ask then is, do any of my state transitions require a Turing-complete language to express? In 99.999% of all software ever written the answer is "no". Inability to prove termination is almost certainly a software bug or flaw in the design. The theoretical exceptions are exotic algorithms for which termination is not provable; the practical exceptions are probabilistic algorithms for which termination is not in fact guaranteed (e.g. a naïve hash table implementation).

The difference is important. A state machine, even one that represents a non-terminating system over infinite input (e.g. a web server), still can be reasoned about. This is exactly the domain of tools like TLA+, which can answer questions such as, "is my system guaranteed to always eventually take action X given input Y?" and "is my system guaranteed to never terminate?" But to be able to answer such questions, it's a prerequisite that the transitions between states do in fact terminate.


It's not a given that the input is infinite. If the user types in "Rob" and then goes away forever, the program will neither terminate nor process an infinite number of non-"Ralph" strings. It will just sit there waiting (possibly in a loop, possibly not) for a hardware interrupt that will never arrive.


For decidability to apply, the input needs to be known a priori in its entirety. Decidability is a mathematical statement about natural numbers (or any other countably infinite set); it has absolutely nothing to do with "paused" input or any notion of "interactivity".

The type of behavior you're concerned with does on the other hand fall squarely in the realm of temporal logics (e.g. TLA+), which do concern themselves with interactivity and indeterminate pauses. For example, the statement "so long as the user eventually inputs the string 'Ralph', the algorithm eventually terminates" is decidable for any finite state machine.

In other words, if you limit your interactivity logic to that expressible by a finite state machine with decidably halting transitions, congratulations, you are writing decidably halting programs, even though they don't "halt" in the temporal sense. It's always possible to tell whether they will halt for a given input by completely analyzing the state space.


Isn't this a question of upper-bound on input? Not specific examples that are less than the upper bound..


So how does non-Turing completeness work with infinite streams?

Many practical applications (like this one) work with a possibly infinite stream of user inputs / requests etc. If our server, browser, or game application is guaranteed to eventually stop, we know something is wrong. However, we would like to guarantee that our application won't run forever on a single request / input / time tick. So does this mean that we mostly want to avoid using a Turing-complete language, but that the Turing-complete part needs to be handled somewhere, maybe outside of our code? Something like how Haskell works with side effects. What do you think?


Coq is not Turing complete but has support for coinductive data types (= infinite streams) and (co-)recursion over them.

Simplifying a lot, it has a syntactic "guard condition" that says that you must produce some result before you're allowed to make a recursive call. For example, you can map over an infinite stream because a map produces a result for each element of the input stream. Unlike Haskell, you cannot write a fold over an infinite stream because you would need to look at all elements before producing a result.

So if you can structure your system as a transformation from an infinite stream of requests to an infinite stream of responses, you're fine in Coq even though it is not Turing complete.

The intuition is that, just like in Haskell, you don't actually end up doing an infinite computation if only a finite part of the final result is ever requested.
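
A rough Python analogue of that style, for intuition (generators standing in for coinductive streams; this is my sketch, not Coq): a stream-to-stream map is productive because each output element needs only finitely many input elements, whereas folding the whole stream would have to look at everything before producing anything.

    from itertools import count, islice

    def requests():
        """An infinite stream of incoming requests."""
        for n in count():
            yield {"id": n}

    def handle(request):
        return {"id": request["id"], "status": "ok"}

    def responses(reqs):
        """Productive: one response is emitted per request consumed."""
        for r in reqs:
            yield handle(r)

    # Only the finite part that is actually demanded ever gets computed:
    print(list(islice(responses(requests()), 3)))

    # By contrast, sum(r["id"] for r in requests()) would have to inspect every
    # element before producing anything, and never returns.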


If you have support for coinductive data types then you are Turing complete. Proof: you can simulate a Turing machine. However, Coq's term language is not Turing complete, meaning that every syntactic element in Coq evaluates to a terminating computation.


> So does this mean that we mostly want to avoid using a Turing-complete language, but that the Turing-complete part needs to be handled somewhere, maybe outside of our code? Something like how Haskell works with side effects.

Yes. E.g. you might have most of your logic in a turing-incomplete per-tick or per-request function, and then a single explicitly "unsafe" infinite loop at top level. In a language like Idris this happens naturally - you just have an explicit distinction between total and not-necessarily-total functions.
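
A minimal sketch of that structure in Python (illustrative only; unlike Idris, Python can't actually enforce totality, so the bound here is just a convention): all the interesting logic lives in a per-request function whose loops have explicit limits, and the only unbounded loop in the program is the trivial driver at the top.

    MAX_STEPS_PER_REQUEST = 10_000        # hypothetical bound

    def handle_request(request: str) -> str:
        # Bounded per-request logic: every loop here has an explicit upper limit.
        out = []
        for ch in request[:MAX_STEPS_PER_REQUEST]:
            out.append(ch.upper())        # stand-in for real, bounded work
        return "".join(out)

    def serve(get_next_request, send_response):
        # The single, explicitly "unsafe" unbounded loop in the whole program:
        # it does nothing except pull a request and dispatch to the bounded handler.
        while True:
            send_response(handle_request(get_next_request()))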


See my sibling comment. Decidability by definition does not apply to infinite inputs (i.e., it is a nonsensical question to ask, like "what is the square root of potato?"). Some proof languages like Coq implicitly extend decidability to extend to infinite streams with finite representations (think generative regular expressions or state machines).


It won't. It will instead operate over the first googol items and then require a restart.


I miss line numbers


> This kind of program is pretty analogous to basically every program that communicates with the network, user, or some other external system.

Not really; it's quite possible, and often desirable, to do that without potentially infinite loops.


So how do you do that for the server side of things? Only accept a certain number of connections before you have to completely shut down and restart the server? How do you do the restart without the infinite loop somewhere?


Exactly - you want timeouts, exponential backoff, etc. You want to know what happens if a network failure happens - you don't want to retry ad nauseam.


Some of the most reliable programs are those without exponential backoff and which retry infinitely in network failure situations. `ping` is one example.


And yet I'd be completely fine with a ping that was capped to running for no more than a week. YMMV though.

And more to the point, you don't want a program that goes on forever in a setup like ethereum. I guess your rebuttal was called for since the initial point was a little hyperbolic.


How?


The good old O-Machine.


> All useful algorithms are productive.

I'm gonna have to call "not quite" on that one. There's some zeroth-order truth and appeal to that statement, but there are certainly stochastic algorithms that are virtually guaranteed to be awesome, yet have a nonzero-likelihood, unbounded worst-case scenario.


Doesn't any program that processes interactive/network IO require an infinite loop?

Are those programs not useful?


> Doesn't any program that processes interactive/network IO require an infinite loop?

No.

I mean, there are lots of common uses of unbounded loops in that domain, but any of them could be replaced with maximum-bounded loops with sufficiently large bounds and be unnoticeably different in practice, mostly cutting off (largely pathological) edge cases.


It seems that a better solution is to guarantee that each step of such an unbounded loop is bounded, and clearly distinguish these types of processes.

For example, a kernel's scheduler should run indefinitely, but each scheduling step should be bounded. Ideally we would specify the scheduler loop as a non-terminating yet "productive" process.


No, they can use interrupts instead.


I don't understand how that avoids looping. If your program doesn't loop waiting for interrupts, it has terminated. So it won't be getting those interrupts.


AIUI smart contracts mean that a ton of nodes has to execute this code, so making the language of the code as weak as possible would seem like an important design goal to me from a langsec PoV.


Decidability doesn't necessarily imply the absence of "optimization deficiencies". Many program analyses make approximations not (only) because of undecidability but simply due to complexity issues. Even in decidable languages it's trivial to write programs with an exponential number of code paths, which forces analyzers to make approximations.


Is there any practical programming language (or dialect, or static checker) that enforces this property? Or is Viper the first attempt at a usable, yet decidable language?


I could think of some examples, but you might disagree on the "practical" part.

If I recall correctly, Pascal-style languages have counting for loops where you cannot modify the loop counter or the loop bound inside the loop. That is, you must use a while or repeat loop or recursion to have non-termination. A Pascal compiler would be trivial to extend with syntactic checks for the absence of such loops and recursion. But I don't think it's been done.

You could go towards more programming flexibility but a higher proof burden and use Spark/Ada (for Ada) or Frama-C (for C), but you would need to annotate loops with variant expressions to help the system prove termination.

Note that if you find languages without general loops and without recursion impractical, you might be disappointed by Viper as well. According to the grammar given on GitHub, Viper only has counting loops and no function calls at all except to a few built-in functions. The latter might just be an oversight, though. Functions are too useful to get rid of completely.
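
To make the "trivial syntactic check" idea concrete, here is a toy sketch using Python's ast module (my own illustration, leaning on the fact that Viper reuses Python syntax): reject any function that contains a while loop or a direct self-call. A real checker would also have to worry about for loops over arbitrary iterators, indirect recursion, and so on.

    import ast

    def obviously_terminating(source):
        """Toy check: reject `while` loops and direct recursion.

        Any `for` loop passes (in real Python a `for` over an arbitrary
        iterator can still be infinite), so this is only a sketch of the
        Pascal-style counting-loop idea, not a real termination checker.
        """
        tree = ast.parse(source)
        for fn in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
            for node in ast.walk(fn):
                if isinstance(node, ast.While):
                    return False
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Name)
                        and node.func.id == fn.name):
                    return False
        return True

    print(obviously_terminating("def f(n):\n    for i in range(n):\n        pass"))  # True
    print(obviously_terminating("def f(n):\n    while n: n -= 1"))                   # False
    print(obviously_terminating("def f(n):\n    return f(n - 1)"))                   # False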


> Bounds and overflow checking, both on array accesses and on arithmetic

I like that one. Anyone know any widely adopted languages that do this?


Rust has support for both of these, though the arithmetic overflow checks are effectively opt-in. Rust does define exactly what happens in overflow situations, and provides different functions for controlling overflow behaviour in other situations.


It's a little glib, but Python 3 (and Ruby, I believe) both have checked array access and vacuously have overflow checking, given integers can't overflow (by default).
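
For concreteness, both behaviours in stock Python 3:

    xs = [1, 2, 3]
    try:
        xs[10]
    except IndexError as e:    # out-of-bounds access raises instead of reading garbage
        print("bounds check:", e)

    print(2 ** 64 + 1)         # no overflow: ints grow to arbitrary precision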


Seems to be at runtime, so as far as bounds checking goes, that's almost every language.

Overflow checking is a bit more rare:

* Swift has it by default (error)

* So do Python, Ruby, Erlang (promotion to arbitrary precision)

* C# has an optional Checked Context though I don't know how common it is

* Rust has checked operations (opt-in, both error and saturating), checks (error) by default in debug mode, and can optionally check (error) in release mode


Idris has compile time bounds checking.

It is a pretty special case though, you have to jump through a number of hoops using dependent types to get it.


> Idris has compile time bounds checking.

Indeed it does, but I'm guessing it doesn't come close to qualifying for "widely adopted language", my list is already stretching it.


Both GCC and Clang have an extension to C that provides the __builtin_xxx_overflow functions:

GCC: https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Builtins...

clang: https://clang.llvm.org/docs/LanguageExtensions.html#checked-...


Nim, but you can turn it off if you want


Sorry, I'm confused, does bounds checking mean index out of bounds? If so, most modern languages?


I think the question was more about bounds checking on arithmetic, which most modern languages don't do by default.


Fortran


Swift

C# has the ability to opt into checked arithmetic as the default. I don't know of anyone that uses it...


Ada?


Can a blockchain project suffer from scope creep, and if so, what would that look like? I have to imagine you don't have to invent a new language to write safe code (which, in my noob mind, is what this is). Maybe pick up where some other safety-oriented languages already are. If security is the goal, is their path equivalent to this one: https://en.wikipedia.org/wiki/Formal_verification ? I can't imagine so, since they're making a new language.

It could be that they're making it a little safer, and a little easier, but not really that much safer. The features are described, and they look mostly security-oriented?



I think both security and productivity are being thought about. Having a domain-specific language, built with blockchain deployment in mind, is definitely useful.


What are some non-speculative use cases for ethereum that wouldn't be better served by just using aws?


Educational transcripts, patent filings, farmland ownership transactions, small-supply-chain records, lab reports, government transparency reports... Pretty much any records where the total amount of data is fairly small, but you want to share it publicly in a place that's long-lived and extremely tamper-resistant, so others will trust it. Smart contracts (computer programs) can do more, of course.


Patent filings / Farmland ownership transactions: There is a reason these aren't already bearer assets.

Educational transcript / lab reports / gov transp.: How is it an efficiency for this to be on a blockchain. Do we now not trust archive.org?


You didn't get the memo. We don't trust anyone now.


All of those are speculative uses. What are the non-speculative uses?


How do you tie non-physical things to a blockchain? You can't, without trusting some non-blockchain entity.

And if you do end up trusting some non-blockchain entity, then the whole effort is pointless. (You might just as well directly trust them with your patent filings, reports, etc instead).


Non-physical things (i.e. pure information) are the one thing you can tie to a blockchain. Maybe there are enough valuable pure information things for it to work.


Sorry, dumb mistake on my part, I meant non-digital things.


Wouldn't IPFS be a better fit for storing data than the Ethereum blockchain?


One idea I heard was you would just store the IPFS hash address on the Ethereum blockchain, so you can store basically anything in there without blowing up the size.
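
Roughly this pattern, sketched in plain Python for illustration (sha256 standing in for the real IPFS content hash, and a dict standing in for contract storage): the chain only ever holds the fixed-size digest, while the bulky data lives in IPFS.

    import hashlib

    def content_address(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()       # fixed 32 bytes, whatever the data size

    on_chain_storage = {}                          # stand-in for contract storage

    def register(record_id: str, document: bytes):
        on_chain_storage[record_id] = content_address(document)
        # the document itself would be pushed to IPFS, addressed by its hash

    def verify(record_id: str, retrieved_document: bytes) -> bool:
        return on_chain_storage.get(record_id) == content_address(retrieved_document)

    register("deed-42", b"scanned farmland deed")
    print(verify("deed-42", b"scanned farmland deed"))   # True
    print(verify("deed-42", b"tampered deed"))           # False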


What about the ETH blockchain is needed to fulfill this requirement? Bitcoin's blockchain can be used in the exact same way.


The ethereum blockchain cures all that ails you. It is an undertaking of great advantage.


Do you think they had non-speculative use cases in mind when creating Ethereum? I don't think so. There are way better ways to create decentralized applications. Without the coins part.


I can't think of one that doesn't involve flouting regulations. Though we have truly awful regulations. Prediction markets are the only reason I am interested.


What inefficiencies are closed by using ethereum for prediction markets?


Your app does not rely on a counterparty anymore. You have access to the blockchain, you have access to the prediction market. Think of all the countries where prediction markets (aka betting) are forbidden.

Of course, this is not exclusive to ethereum, but they seem to be the blockchain that is both highly adopted and the one with the most features (I think they'll be the first to offer a Turing-complete language for programming smart contracts)


You need an oracle for prediction markets to function. Voting on the truth doesn't work; you will be Sybil-attacked into oblivion. Is there an oracle present in these contracts? If there is, why wouldn't basic multi-sig work just as well?


Yes, the Oracle problem is the elephant in the room. Augur tries to make it work by creating a new reputation token that accrues to oracles that provide true information (so effectively paying people to be honest), but I'm not sure where development is at - it seems pretty slow.


TruthCoin style oracles look like they might work. I think they are worth trying but wouldn't be surprised if they failed.


Having to pay for taxes, employees, hosting, ddos protection, lawyers, etc.


Company ownership/equity, smart contract based profit sharing to equity holders. I wonder how long that is going to last before regulators step in though


I agree: it seems the only efficiency Ethereum facilitates is to act as the platform to create unregulated securities. I'm sure the SEC loves that.


A service that coordinates interventions with sex workers to try to find people who are sex trafficking victims and get them help.

A service for coordinating resistance against central american drug cartels.

These kinds of things are liable to get you killed if you attach your name to them. Better to just write the recipe and let the network do the administration.


Decentralized DNS.


Why would you want bearer assets for that? What inefficiency is there in the current system that would be solved with more decentralization? Why ethereum instead of something like namecoin or counterparty?


> wei_value: an amount of wei

What is wei? Another name for gas?


It's the name for the smallest unit of ETH value.


10^18 wei = 1 Ether. You need Ether to buy gas.


How ready is this now for writing real beta-quality contracts? I'm just starting; should I pick this over Solidity?


for max confusion, there also used to be a programming language called vyper. http://lambda-the-ultimate.org/classic/message1064.html


Ethereum or Ethereum Classic?


> Requires python3

o_O twitch


Does it bother you that it requires Python3 specifically, or just Python at all? (I can empathize with not wanting to use a language that's implemented in Python, although if I had to, I'd probably prefer it was Python3)


The latter. It just seems wrong for a language to require python just to install.


Packages currently installed on my system that depend on Python (2 and 3, respectively) and don't have "Python" in their name:

  calibre calibre-bin clang-format-3.8 creduce duplicity gimp gvfs-backends
  inkscape libatk-bridge2.0-dev libatk1.0-dev libatspi2.0-dev libcairo2-dev
  libgail-dev libgdk-pixbuf2.0-dev libglib2.0-dev libgnomecanvas2-dev
  libgtk-3-dev libgtk2.0-dev libgtksourceview-3.0-dev libgtksourceview2.0-dev
  libpango1.0-dev libsmbclient mercurial mercurial-common prosper rubber samba-libs
  texlive-latex-extra texlive-pictures texlive-pstricks virtualbox
  virtualbox-qt virtualenv-clone virtualenvwrapper vlc-plugin-samba

  aisleriot apparmor apport apport-gtk aptdaemon apturl apturl-common
  checkbox-converged checkbox-gui command-not-found compiz compiz-gnome
  dh-python firefox foomatic-db-compressed-ppds gconf2 gedit gir1.2-ibus-1.0
  gnome-menus gnome-orca gnome-software gnome-terminal hplip hplip-data ibus
  ibus-table inkscape language-selector-common language-selector-gnome
  libbonoboui2-0 libgnome-2-0 libgnome2-0 libgnome2-bin libgnome2-common
  libgnomeui-0 libgnomevfs2-0 libgnomevfs2-common libgnomevfs2-extra
  lsb-release nautilus-share onboard onboard-data openprinting-ppds
  plainbox-provider-checkbox plainbox-provider-resource-generic
  plymouth-theme-ubuntu-text printer-driver-foo2zjs
  printer-driver-foo2zjs-common printer-driver-postscript-hp
  printer-driver-ptouch printer-driver-pxljr qml-module-io-thp-pyotherside rhythmbox
  rhythmbox-plugin-zeitgeist rhythmbox-plugins sessioninstaller snap-confine
  snapd software-properties-common software-properties-gtk
  system-config-printer-common system-config-printer-gnome
  system-config-printer-udev totem-plugins ubuntu-core-launcher ubuntu-desktop
  ubuntu-drivers-common ubuntu-minimal ubuntu-release-upgrader-core
  ubuntu-release-upgrader-gtk ubuntu-software ubuntu-standard
  ubuntu-system-service ufw unattended-upgrades unity unity-control-center
  unity-control-center-signon unity-lens-photos unity-scope-calculator
  unity-scope-chromiumbookmarks unity-scope-colourlovers unity-scope-devhelp
  unity-scope-firefoxbookmarks unity-scope-gdrive unity-scope-home
  unity-scope-manpages unity-scope-openclipart unity-scope-texdoc
  unity-scope-tomboy unity-scope-virtualbox unity-scope-yelp
  unity-scope-zotero unity-webapps-common update-manager update-manager-core
  update-notifier update-notifier-common usb-creator-common usb-creator-gtk
Your mileage may vary, but the prospect of a Python-less system doesn't appear tantalizingly close :-)

On the other hand, this language seems to take a subset of Python's syntax and interpret it with different semantics. I find that both theoretically neat and pragmatically smart. Making up a new syntax and writing a reliable parser is boring busywork that takes time better invested elsewhere.


Fair point. I can only answer for myself, but for me it's more of the idea of a language being implemented in Python rather than an issue with having Python installed. I'll admit that I'm a bit of a PL snob, but writing a language in Python just feels inefficient.

Another thing to note is that the vast majority of the packages you list are things that might not be installed in a server environment; I admit I haven't thoroughly read through every single package you listed, but the vast majority of them seem to be GUI-related (Gnome, Unity, etc.) or printing-related, none of which would be necessary on a server. The only ones that stick out as not fitting into these categories are the Ubuntu-specific things like apparmor and the various apt-related packages (which would presumably not be present on a machine that isn't Ubuntu or some other Debian-based system), clang-format (which wouldn't really be necessary on a server, since ideally you'd format your code before pushing), and lsb-release (which may or may not be necessary; on the machine where I'm currently writing this, the only package I have that depends on lsb-release is mongodb, so for servers not having a database, or having a different database, this wouldn't be necessary either).


Not only that, but I assume it needs it to run as well, since it seems to be implemented in Python.


Is Viper Turing complete? If so, is that a mistake? Bitcoin Script specifically is not.


As I understand it, the entire purpose of Ethereum is a blockchain with a Turing-complete scripting/contract language.


nope, it's ICOs


> if a language is decidable it can't be Turing complete

mentioned in a different comment in this thread!



Is there a particular piece of insight from that paper that indicates a mistake in the design of Viper? I love erights' work and don't doubt that there is a lot to be learned from it in this context, but some guidance on how to apply it here would help.


The DAO attack was plan interference. Absorb and grok that idea, and then the rest will hopefully follow.

Specifically, Viper has two glaring flaws, both of which are not suffered by E, Monte, etc. The first is a lack of correct encapsulation. Viper appears to have a Python-style object model, which means that it's not possible to closely hold a value. As a test, how would you build an object with a value in its closure which no other object can access?

Second, Viper doesn't have mitigation for plan interference, which means that it's possible for certain kinds of recurrences to alter your program's security guarantees. As a test, how would you mitigate the DAO attack in Viper?

Finally, I fear that these flaws may be inherent to Ethereum, meaning that any language on Ethereum inherits these flaws structurally and cannot mitigate them. In that case, maybe it's time to design a capability-safe Ethereum!
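
For readers who haven't met the term: the DAO-style interference pattern is easy to show in plain Python (this is a sketch of the shape of the attack, not EVM or Viper code). The contract's plan is "pay out, then update the balance"; the attacker's callback re-enters withdraw() before the update, so the balance check passes again and again.

    class VulnerableVault:
        def __init__(self):
            self.balances = {}

        def deposit(self, who, amount):
            self.balances[who] = self.balances.get(who, 0) + amount

        def withdraw(self, who):
            amount = self.balances.get(who, 0)
            if amount > 0:
                who.receive(amount, self)     # external call happens FIRST...
                self.balances[who] = 0        # ...state update happens too late

    class Attacker:
        def __init__(self):
            self.stolen = 0
            self.reentries = 3                # bounded only so the demo halts

        def receive(self, amount, vault):
            self.stolen += amount
            if self.reentries > 0:            # plan interference: call back in
                self.reentries -= 1
                vault.withdraw(self)

    vault = VulnerableVault()
    attacker = Attacker()
    vault.deposit(attacker, 100)
    vault.withdraw(attacker)
    print(attacker.stolen)                    # 400, not 100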


Indeed, in the (pre-DAO) Ethereum audit report https://github.com/LeastAuthority/ethereum-analyses/blob/mas... they brought up your second issue and recommended Mark Miller's thesis (which covers the material in the OP).

I haven't looked into Viper and how it might address these matters.


Can you give me some context on the paper here?



