Things about programming I learned with Go (mjk.space)
143 points by mjkpl on Aug 22, 2017 | 153 comments


> It’s better to compose than inherit

I know that this is just a restatement of the "composition, not inheritance" mantra in Go, but it still makes about as much sense as "product types, not sum types". A more meaningful statement would be: "use inheritance to express sum types, use composition to express product types." There's no "better" relation between the two concepts, each has its own distinct purpose.

Yes, inheritance has traditionally sometimes been abused to implement a limited form of product type (for example, to work around Java's lack of proper value types), but that's a misunderstanding of what inheritance is used for.

> Go doesn’t have the concept of inheriting structs by design.

Go has automated delegation, and as we know, (automated) delegation IS inheritance [1, 2]. Some implementations of delegation are a bit more limited, some are a bit more expressive, but fundamentally they have the same purpose.

[1] http://dl.acm.org/citation.cfm?id=38820

[2] http://dl.acm.org/citation.cfm?id=900985


'A more meaningful statement would be: "use inheritance to express sum types, use composition to express product types."'

I'm not sure how well that works as a meaningful statement, because it assumes that the reader has a solid grasp of what a sum type and a product type are. If not (and I make the perhaps dubious assumption based on my own experience and knowledge that most programmers, especially most people using mainstream languages like Go or C++ or Python will not) then it doesn't really convey any information, unfortunately. I think a lot more people will know roughly what inheritance and composition are.


> I'm not sure how well that works as a meaningful statement, because it assumes that the reader has a solid grasp of what a sum type and a product type are.

I think you're misparsing what I mean by "meaningful" here, which is simply a sentence or phrase having (more) meaning, not that it's easy or easier to understand. It's not about being more comprehensible, it's about being more "accurate" (I avoided the terms "accurate" and "correct", because they imply an absolute objectivity that you generally don't have when talking about software engineering concepts).


>I'm not sure how well that works as a meaningful statement, because it assumes that the reader has a solid grasp of what a sum type and a product type are.

That's true about all statements: they assume some prior knowledge from the one hearing them. That doesn't make a statement less meaningful -- just more demanding. Whether a statement is meaningful or not is an orthogonal concept (to it requiring prior knowledge).


That's ... sad. Sum types and product types are simultaneously easier to understand and are more useful to know about than inheritance.

Sum types are simply types whose values can be one of several choices - surely that's something a child can reasonably understand!

Product types might be a little harder to grok - they're essentially structs - but surely no harder to understand than inheritance and the whole IS_A/HAS_A mess.


No, sum types and product types relate to values; inheritance and composition relate to objects, which include values AND behavior. Inheritance is therefore abused to share behavior. This distinction is important, and relates to some common OOP patterns of popular languages.

In that model, preferring composition of behaviour over inheritance is definitely important to understand: overriding behaviour gets tricky quickly, and when functionality is spread across many classes in the inheritance chain, it becomes difficult to follow and see the full picture.


> No, sum types and product types relate to values; inheritance and composition relate to objects, which include values AND behavior. Inheritance is therefore abused to share behavior

I remember back in high school or early college I was having a conversation with a programmer about when the right time to use inheritance is, and he stated that "inheritance should be used for polymorphism, not just to share code", which to this day I think is a fairly good heuristic for determining whether or not inheritance is the right tool for a given problem.


Inheritance is not a way to express sum types, it's a form of subtyping. A sum type is like a discriminated union, it can only be one thing at a time. Subtyping allows a value to have multiple (related) types simultaneously, which is much more expressive. I suppose you can use one level of single inheritance to emulate a sum type, but you could just as well emulate it with a discriminated union in Go, e.g. a struct with a type identifier and an interface{} holding the value.
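
For illustration, a minimal sketch of that discriminated-union emulation (type and tag names invented, not from the article):

    package main

    import "fmt"

    type kind int

    const (
        kindInt kind = iota
        kindString
    )

    // value is a hand-rolled discriminated union: the tag records
    // which type the interface{} currently holds.
    type value struct {
        tag kind
        v   interface{}
    }

    func main() {
        x := value{tag: kindString, v: "hello"}
        switch x.tag {
        case kindInt:
            fmt.Println(x.v.(int) + 1)
        case kindString:
            fmt.Println(x.v.(string) + ", world")
        }
    }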


> Inheritance is not a way to express sum types, it's a form of subtyping.

A distinction without a difference. This is probably most visible in languages like Scala and Kotlin, which implement algebraic data types by way of inheritance.

That inheritance creates a subtyping relationship is irrelevant; there's a similar subtyping relationship between variants (or groups of variants) and the overarching type using a traditional sum type notation as in Haskell or ML. This is most clearly visible in OCaml's polymorphic variants [1, 2].

[1] http://caml.inria.fr/pub/docs/manual-ocaml-400/manual006.htm...

[2] https://stackoverflow.com/questions/16773384/why-does-ocaml-...


Pony (https://ponylang.org) uses sum types, perhaps excessively. Just yesterday, I wrote:

    (None | (In, USize))
I.e., None (the null valued type) or a pair made of a type variable (In) and a USize.

The thing is, the values that satisfy this type are not subtypes of None and a pair. (That would be silly, given None.) Such a value is either None, or a pair.


> The thing is, the values that satisfy this type are not subtypes of None and a pair. (That would be silly, given None.) Such a value is either None, or a pair.

Unless I'm misreading you, this seems to be a misunderstanding of what sum types are. A sum type `T = A | B` represents the (disjoint) union of all possible values of `A` and of all possible values of `B`, simply put, not the intersection (as you seem to indicate by the phrasing of "not subtypes of None and pair"; correct me if you meant something else).

Recall what subtyping means (I'm going with Wikipedia's definition here for sake of accessibility):

> [S]ubtyping (also subtype polymorphism or inclusion polymorphism) is a form of type polymorphism in which a subtype is a datatype that is related to another datatype (the supertype) by some notion of substitutability, meaning that program elements, typically subroutines or functions, written to operate on elements of the supertype can also operate on elements of the subtype.

This holds in the case of sum types. Operations that work on the sum type will generally also work on the variants that constitute the sum type.

The same goes for inheritance. If an abstract class T has two concrete subclasses `A` and `B`, then a value of type `T` belongs to the union of values of type `A` and of type `B`.


Not true. You can model sums with inheritance along with an exclusivity constraint, but it's a weird model and subtyping is more general. Further, the idea of each variant being its own type is inherently a subtyping sort of idea. Sums don't give names to their components, only distinctions.


> You can model sums with inheritance along with an exclusivity constraint, but it's a weird model and subtyping is more general.

You don't need an exclusivity constraint. Exclusivity is purely a modularity concern; you will still have a finite number of variants in any given software system; traditional ADTs and exclusivity just limit the declaration of variants to a single module. See also "open sum types" vs. "closed sum types", because it can be beneficial to have extensible sum types [1]. Not all sum types are closed; see polymorphic variants and extensible variants in OCaml.

Also, do not confuse the language mechanism used to specify a type with the type itself.

I do agree that inheritance is a generalization of algebraic data types.

> Further, the idea of each variant being its own type is inherently a subtyping sort of idea. Sums don't give names to their components, only distinctions.

Try polymorphic variants in OCaml (mentioned above); or GADTs:

  # type _ t = Int: int -> int t | String: string -> string t;;
  type _ t = Int : int -> int t | String : string -> string t
  # Int 0;;
  - : int t = Int 0
  # String "";;
  - : string t = String ""
There's nothing inherent about summands not having a distinct declared type in ML and Haskell, only convention. Obviously, they do have distinct actual types.

Edit: A practical use case is the representation of nodes for an abstract syntax tree. An `Ast` type can benefit from having abstract `Expr`, `Statement`, `Declaration`, `Type`, etc. subtypes that group the respective variants together in order to get proper exhaustiveness checks, for example.

[1] See the question of how to type exceptions in Standard ML; in OCaml, this led to the generalization of exception types to extensible variants.


I'm aware of polymorphic variants and row types and the like. My concern is one of modularity, in that I consider a running system pretty dead; extensions during coding are where language features and their logics are interesting. Closing your sums is valuable to consumers: they get complete induction principles, for example.

Open sums and row types are a little different in that they represent a fixed/closed type but retain enough information to extend it more conveniently and to see it as structurally related to other (open) sums/products. This is no doubt super useful, but I see it more as an application sitting atop polymorphism rather than a fundamental concept.

Finally, I am exactly confusing the language mechanism with the type it intends to model, because it is exactly here that we have to think about things as both a mechanism and a model. This is where breakdowns occur.

Anyway, I doubt there's a real difference of opinion here. I'm very familiar with the concepts you're discussing, but perhaps argue that they are not as fundamental as regular, closed sums/products and language support for those simplest building blocks is important.


I'm finding that wrapping an interface in a struct can be a good technique. However, the interface{} contains a type identifier, so adding another one seems like wasted space. Usually you can compute it using a type switch.


Yeah, that's a perfectly valid solution as well. Depending on the application, it may be faster to do the assertion on your type identifier rather than introspecting the type. And having a list of type options more directly maps to a true sum type. I probably wouldn't actually do it in real Go code if I could avoid it, though.
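
For illustration, a sketch of the type-switch variant (types invented), where the dynamic type itself serves as the tag and an unexported marker method keeps the set of variants closed to the package:

    package main

    import "fmt"

    type shape interface{ isShape() }

    type circle struct{ r float64 }
    type rect struct{ w, h float64 }

    func (circle) isShape() {}
    func (rect) isShape()   {}

    func area(s shape) float64 {
        // The type switch plays the role of pattern matching
        // over the variants; no explicit tag field is needed.
        switch s := s.(type) {
        case circle:
            return 3.14159 * s.r * s.r
        case rect:
            return s.w * s.h
        }
        return 0 // unreachable while the variant set stays closed
    }

    func main() {
        fmt.Println(area(circle{r: 1}), area(rect{w: 2, h: 3}))
    }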


If you find yourself not knowing something apparently important, maybe it's worth spending half an hour with Wikipedia and the like to gain the understanding?

Among other things, the idea of a sum type helps you understand the nature of the "billion dollar mistake" (the inclusion of null references), why it pops up in language after language, and what more civilized methods of handling it might be. It will help you as a mainstream programmer, too.


I did look at Wikipedia, yes, which gave me a page of type-theory-related stuff without any apparent grounding in practicalities, which suggests that it's apparently unimportant unless you particularly like mathematical theory. Yes, I could go and research functional programming languages and type theory; but my point is that if you're critiquing a blog post on a non-functional programming language, then doing it in terms that only FP advocates will understand is missing the target audience.


A sum type `T = A | B` means that a value of type `T` can be either of type `A` or of type `B`. Such a type is used to express polymorphism.

A product type (from "Cartesian product", i.e. tuples) `T = A * B` means that a value of type `T` has a component that is of type `A` and another component that is of type `B`. It is used to aggregate parts into a whole.

> Yes, I could go and research functional programming languages and type theory

It has absolutely nothing to do with functional programming and touches only upon the barest essentials of type theory (and calling it type theory is already stretching it, because it's just about defining a couple of common computer science concepts).

Sum and product types are fundamental computer science vocabulary to such an extent that it's not really possible to have a useful discussion about programming language semantics without them.


>A sum type `T = A | B` means that a value of type `T` can be either of type `A` or of type `B`. Such a type is used to express polymorphism.

Taking this example (in English / pseudocode):

Define a class Animal.

Define Dog as a subclass of Animal.

Define Cat as a subclass of Animal.

Case 1) Suppose we have a variable a1 that can, at runtime, contain (or refer to) either a Dog or an Animal instance.

Case 2) And suppose we have a variable a2 that can, at runtime, contain (or refer to) either a Dog or a Cat instance.

Referring to your quoted sentences above, would you say that Case 1, Case 2 or both, are about sum types?

Just trying to understand the terminology.


I'm thinking you just violated the Liskov substitution principle (https://en.wikipedia.org/wiki/Liskov_substitution_principle).

A sum type is the same thing as a tagged union, a variant record, or a discriminated union (https://en.wikipedia.org/wiki/Tagged_union).


I'll have to look at that, thanks.


I'm not sure if you can really express it in OO. Sum types are used to represent a closed set of variants. Inheritance like that isn't closed.

In languages that have both OO and ML/Haskell-style type systems like Scala, the fact that the set is closed is denoted by the keyword 'sealed'.

example

sealed trait Color

final case object Red extends Color

final case object Green extends Color

Color can only be Red OR Green. In your example you can have an Animal, a Dog, a Cat, and other things can inherit Animal and create more variants.


>Color can only be Red OR Green. In your example you can have an Animal, a Dog, a Cat, and other things can inherit Animal and create more variants.

True. I used traditional OO style inheritance in my example, and my question was about understanding the relationship between it and sum types, based on rbehrends' comment that I was replying to. Interesting, did not know this. So you are saying that sum types are sort of like inheritance with some limits - pre-defined types only. Sort of like an enum (though those do not have inheritance in them) but for classes or types.


> So you are saying that sum types are sort of like inheritance with some limits

Traditionally sum types don't have inheritance: the sum type is a type, and the variants (or constructors) are values of that type. But yes, you could also model it as a "closed inheritance" hierarchy; in fact some languages do exactly that: Scala with sealed traits and case classes (the trait is the sum type, each class is a variant) and Kotlin with sealed classes.

The boundaries can also get fuzzy the other way around: OCaml has polymorphic variants and extensible variants, which blur the line. Polymorphic variants are structural types where multiple sum types share the same value (polymorphic variant types are essentially Venn diagrams), not entirely unlike integral types, and extensible variants (as their name denotes) can get cases added to them by users of the base package/type.

> Sort of like an enum

Sum types can be seen as an extension of enums yes, Rust and Swift call them that, in both languages they're enums where each "value" (variant) can carry different data sets (unlike e.g. Java enums where variants are instances of the enum class). Here's a "degenerate" C-style enum with Rust:

    enum Color { Blue, Red, Green }
and here's one with associated data:

    enum Color {
        RGB(u8, u8, u8),
        HSL(Hue, Saturation, Lightness),
        CMYK(u8, u8, u8, u8),
    }


Good info, thanks.


>final case object Red extends Color

On a side note, that syntax seems a little counter-intuitive - the extends keyword seems to indicate that Red is a subclass of Color, but from what you said it seems more like Red and Green are values of Color. So it seems like an enum would be more appropriate here (for this use of Red, Green and Color)?


A Scala “object” declaration declares a singleton; it's an object, but you can also define object-specific class features.

If the instances need behavior (which in practice they often do), this is useful. Presumably, in the example, they would in a substantive program, but the behavior is irrelevant to the illustration and thus omitted.


Interesting, thanks.


Both. Assuming that you can actually exclude instances of class Cat in case 1 and of Animal in case 2, that is.

In case 1, the actual type of a1 would be Animal | Dog. In case 2, the actual type of a2 would be Dog | Cat. (Either a1 or a2 might be declared with a different type, this is about the values that they can actually hold, according to your stated premises.)


>Both. Assuming that you can actually exclude instances of class Cat in case 1 and of Animal in case 2, that is.

Good point. I was actually thinking in terms of Python OOP, and in that, you cannot exclude those instances (at least not without some extra code).


Animal is a sum type. Its definition is:

    Animal = Cat | Dog
I personally didn't like the comparison, because inheritance is more powerful (which is not always good), and because algebraic types get most of their usefulness from merging sums and products in the same type.


What languages are you familiar with? It might help with explaining the concepts.

Since we're on a Go thread, I'll point out that Go's structs and tuples are both examples of product types.


Most familiar with Python currently. Done some Ruby and Java and C and Pascal earlier. Some D and a bit of C++ and a bit of Go. I do understand that structs and tuples are examples of product types (because the range of the values for a struct or tuple is the Cartesian product of all the possible values for each field). My question was mainly about sum types as described by rbehrends, was trying to relate them in my mind (somewhat, if it makes sense) to traditional inheritance as in Python, Java or C++.


Cool, so unions in C and C++ are a kind of sum type because the variable can have one of a set of types. More usually people think of tagged unions when they think of sum types, I believe Pascal's Variant Records are an example of tagged unions.


Thanks. Yes, I was thinking along the same lines. I don't remember whether Pascal also has this (it's been longer since I used it than C), but in C, being the somewhat more flexible language that it is, IIRC you can also store a value of one type (out of the types in the union) and then read it back out as one of the other types. E.g., define a union of a 16-bit int and a two-char (or two-byte) array, write into it as an int, and read it back out as two chars or two bytes. There are uses for such things, though I can't think of a good example off the top of my head. Okay, one use might be hardware interfacing; another might be conversion between big-endian and little-endian values, if you need to roll your own for some reason (I know there are libs for that).
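
Go has no untagged unions, but the same byte-reinterpretation trick can be sketched with encoding/binary (a rough equivalent for illustration, not anyone's production code):

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    func main() {
        var buf [2]byte
        // Write a 16-bit value, then read its bytes individually:
        // the same view-switching a C union of int16/char[2] provides.
        binary.BigEndian.PutUint16(buf[:], 0x1234)
        fmt.Printf("%#x %#x\n", buf[0], buf[1]) // 0x12 0x34
        // Reading with the other byte order swaps the bytes: 0x3412.
        fmt.Printf("%#x\n", binary.LittleEndian.Uint16(buf[:]))
    }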


Type theory isn't FP as much as logic--it applies everywhere. That said, totally agree that you're unlikely to have encountered it outside of an FP context in 2017.

Product and sum type are fundamental structures for construction of information. A product of two pieces of information is a piece of information equivalent to having both at once: it's sort of like "and". A sum of two pieces of information is a piece of information equivalent to having exactly one or the other and knowing which of the two you have: it's sort of like "xor".


Remember when you first encountered some seemingly hard topic early in your life as a developer. Pointers? Virtual functions? Futures? Pick something from your experience that you had to spend a couple of weeks on to get a grasp of.

Now think:

    * Was it useful?
    * Is it hard, from your current perspective?
Ponder.


A practical example is a function that can return either a value or an error. Those are two different types that are combined into a Sum type called a Result. When you write some code that handles results, you need to account for both possible outcomes or you'll get a compiler error (or a panic).

A product type is basically an object, e.g. a Person type which includes attributes like 'Name', 'Age', etc.
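
In Go terms, a rough sketch (Go spells the Result idea as a (value, error) pair rather than a true sum type; names invented):

    package main

    import (
        "errors"
        "fmt"
    )

    // Value-or-error: morally a sum, spelled as a pair in Go.
    func div(a, b float64) (float64, error) {
        if b == 0 {
            return 0, errors.New("division by zero")
        }
        return a / b, nil
    }

    // Product type: a Person has a Name AND an Age.
    type Person struct {
        Name string
        Age  int
    }

    func main() {
        if q, err := div(1, 2); err == nil {
            fmt.Println(q, Person{Name: "Ada", Age: 36})
        }
    }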


What Wikipedia page did you look at? This one has plenty of practicality:

https://en.m.wikipedia.org/wiki/Sum_type



"but that's a misunderstanding of what inheritance is used for"

I'm not sure what you're implying. In most OOP languages where people tell you not to use inheritance but composition instead (Java, C++, C#, etc.), inheritance can be used to create forms of product types, but also to share and override behaviour hierarchically. It can also create sum types, and all manner of hybrids: weird mixes of sum and product types, partially closed ones, etc.

Because inheritance allows all this, it doesn't matter what the designers of these languages intended inheritance to be used for; the truth is that it allows much more, and so best practices have been put in place to help programmers not use the construct in troublesome ways. One of those is to use composition instead.

Can you simulate closed sum types with inheritance? Yes, but they are clearly not the same thing, since closed sum types cannot emulate all uses of inheritance (the Java kind).

Maybe you're right, a closed sum type pattern could be created and evangelized: an abstract class with no fields and a set of methods, then a one-level inheritance hierarchy where each subclass has its own disjoint set of fields and overrides all methods to work on its fields. But I already feel like in practice this sounds like a nightmare. Too much good intention is needed to maintain it; it's too easy to create a degenerate case of it.


> I'm not sure what you're implying. In most OOP languages where people tell you not to use inheritance but composition instead (Java, C++, C#, etc.), inheritance can be used to create forms of product types, but also to share and override behaviour hierarchically. It can also create sum types, and all manner of hybrids: weird mixes of sum and product types, partially closed ones, etc.

My point wasn't to give an exhaustive list of use cases for inheritance (which would require a small essay); I was pointing out that "composition over inheritance" is a nonsensical statement, just as (say) "loops over modules" would be, as it's a qualitative comparison of orthogonal concepts.


Okay, but it's not, not from the perspective the best practice comes from. The most common use cases for object inheritance can be delivered with object composition instead. This is much more like loops vs. recursion.


The dominant use case for inheritance is polymorphism, which composition cannot do.


You'd use an interface for polymorphism, much better.

That said, I have to thank you for your suggestion of doing sum types with inheritance. I hadn't thought of it, and the use case presented itself at work yesterday. So I learned something new, thanks. It worked well, an abstract class with shared fields, and a derived class for every type in my sum type with distinct fields added to them. Now a variable of the abstract type is effectively constrained to one of the set of its derived children. Limit this to one level and you've got a pretty nice simulated sum type. Just need to remember to handle all cases when working with it. It worked like a charm, wouldn't have thought of it without your comment.


> You'd use an interface for polymorphism, much better.

Interface inheritance is simply the special case of inheriting a purely abstract class. I've never seen a good argument why restricting abstract classes to purely abstract methods is worthwhile and several that speak against it [1, 2]. Languages that separate implementation and interface inheritance (such as Sather) have been tried, but never caught on, because it just leads to a lot of code duplication in order to write the interface twice. It can be useful to have inferred interfaces (as in Dart or OCaml), but there's nothing inherently better about using interfaces over more general abstract classes.

A common use case is to represent types of the form `T * A | T * B`, which (without implementation inheritance) just leads to code duplication for `T` or extra destructuring efforts (if you turn it into a representation of form `T * (A | B)`).

[1] Example 1: it gets in the way of doing Design by Contract as part of the interface of a class, even if you want interface-only inheritance.

[2] Example 2: Abstract classes with significant implementation parts show up all the time in design patterns. (Note that this is not about whether design patterns are good or bad, just that they reflect observed common practice).
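
To make the `T * (A | B)` factoring above concrete, a hypothetical Go-flavored sketch (names invented): the shared fields appear once in an outer struct, and the variant part is an interface field:

    package main

    import "fmt"

    type Variant interface{ isVariant() }

    type Leaf struct{ Value int }  // A
    type Pair struct{ L, R *Node } // B

    func (Leaf) isVariant() {}
    func (Pair) isVariant() {}

    // Node is T * (A | B): the shared fields live here once,
    // instead of being duplicated in every variant.
    type Node struct {
        Line, Col int
        Data      Variant
    }

    func main() {
        n := Node{Line: 1, Col: 2, Data: Leaf{Value: 42}}
        fmt.Println(n)
    }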


What you gain by following more constrained constructs is knowing you can't accidentally use it for something else then what you needed it for.

Take or. You don't need it to be a language construct, the more general conditionals can do it: if y doX else if z doX. While that's more general and thus more powerful, its less expressive. When you mean or use or, less chance of doing it wrong and the intent is clearer and unambiguous to other readers.

Can you define your notation here? I don't follow it. Why do you want a type `T * A | T * B`? I doubt this is the real use case; that's already your solution to a use case. Give me a concrete example.


> Go has automated delegation, and as we know, (automated) delegation IS inheritance [1, 2]. Some implementations of delegation are a bit more limited, some are a bit more expressive, but fundamentally they have the same purpose.

No, it is not inheritance, and Go doesn't have automated delegation; it has type embedding.

Given struct A, if a function requires A, you can't pass any type B that embeds A; you must pass A.

    type A struct {
        Foo int
    }

    type B struct {
        A
    }
    func acceptA(a A) {} // you can't pass B here
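
What embedding does provide is automatic promotion of A's fields and methods through B, which is the delegation being discussed; a minimal sketch:

    package main

    import "fmt"

    type A struct{ Foo int }

    func (a A) Hello() string { return "hello" }

    type B struct{ A }

    func main() {
        b := B{A: A{Foo: 1}}
        // A's field and method are promoted onto B...
        fmt.Println(b.Foo, b.Hello())
        // ...but B is still not assignable where an A is required:
        // var _ A = b // compile error: cannot use b (type B) as type A
    }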


> No, it is not inheritance, and Go doesn't have automated delegation; it has type embedding.

Different names for the same thing.

> func acceptA(a A) {} // you can't pass B here

This says that the function isn't polymorphic in its argument, not that the types aren't polymorphic. Function resolution that is not polymorphic is not limited to Go, but occurs in inheritance-based languages, too. Example in OCaml:

  class a = object method foo = 0 end
  class b = object inherit a method bar = 1 end

  let f (x: a) = ()
  let () = f (new b)
You will get an error that the type of `new b` (= `b`) is not compatible with `a`, because they're not identical, even though `b` is a subclass of `a`.

If you replace the declaration of `f` with:

  let f (x: #a) = ()
it'll work, because `#a` denotes a polymorphic type, matching `a` or any subclass of `a` (same as though you'd specify an interface in Go [1]). You can also cast the type explicitly to `a` to work around the error.

[1] Like Go, OCaml uses structural subtyping.


> Different names for the same thing.

No, different names for completely different concepts.

> This says that the function isn't polymorphic in its argument, not that the types aren't polymorphic. Function resolution that is not polymorphic is not limited to Go, but occurs in inheritance-based languages, too. Example in OCaml:

Struct types in Go are not polymorphic in any way, period. The only way to achieve polymorphism in Go is through interfaces, which are not concrete types, unlike classes.

> [1] Like Go, OCaml uses structural subtyping.

No it doesn't. There is no subtyping in Go. There is only type conversion and type assertion.

Whatever you wrote with OCaml is completely irrelevant to the discussion, as the type systems are fundamentally different. But let's pretend it is.

It's interesting that you didn't bother trying to write the equivalent of `let f (x: #a) = ()` in Go, because you CANNOT. An interface IS NOT a substitute for OCaml inheritance, as the latter is more precise and specialized.

So no, Go doesn't support inheritance at its core. That's a false assertion. Go interfaces do not give a damn about what the actual implementation is, unlike OCaml sub classes.


> There is no subtyping in Go.

There is. Go simply has structural subtyping [1] rather than nominal subtyping.

> It's interesting that you didn't bother trying to write the equivalent of `let f (x: #a) = ()` in Go, because you CANNOT.

You forget that OCaml also uses structural subtyping. Writing `#a` is effectively the shorthand for the inferred interface. So you can write it also as:

  let f (x: < foo: int; .. > ) = ()
where

  < foo: int; .. >
is the interface of any class implementing at least a method `foo` of type `int`, i.e. what you'd write as

  interface {
    foo() int
  }
in Go. You just don't in practice, because `#a` is both more convenient and readable.

And the corresponding Go function would be:

  func f(x interface { foo() int }) {
  }
[1] https://en.wikipedia.org/wiki/Structural_type_system


> There is. Go simply has structural subtyping [1] rather than nominal subtyping.

No there is not, period.

  interface {
    foo() int
  }
is not subtyping. But it's interesting how you move the goalposts with each comment. You go from inheritance to subtyping to "structural subtyping". You're not interested in a serious discussion.


> No there is not, period.

This is an assertion, not an argument. It's not how the literature sees it. Plus, you can even have it from Rob Pike himself if you don't believe me: https://twitter.com/rob_pike/status/546973312543227904

> is not subtyping. But it's interesting how you move the goalposts with each comment.

I didn't say that this piece of code constituted subtyping. Here, I was refuting your specific claim that the equivalent of `f` cannot be written in Go.

> You go from inheritance to subtyping to "structural subtyping".

This is not how the thread went. I added a reference to structural subtyping as a purely explanatory footnote to illustrate a well-known similarity between OCaml and Go; subtyping had not been mentioned at all so far. Starting at that footnote, you introduced a digression by claiming that Go does not have subtyping at all.


> This is an assertion, not an argument. It's not how the literature sees it. Plus, you can even have it from Rob Pike himself if you don't believe me: https://twitter.com/rob_pike/status/546973312543227904

The hell with your "assertion". Structural typing =/= structural subtyping, just like the presence of classes in a language in no way means that language supports inheritance.

> I didn't say that this piece of code constituted subtyping. I was refuting your specific claim that `f` cannot be written in Go.

It cannot, since you had to add an interface to the mix, unlike with your OCaml example, which proves that Go structs and OCaml classes are not similar in any way. Go structs do not support any kind of polymorphism, unlike OCaml classes.


> Structural typing =/= structural subtyping, just like the presence of classes in a language in no way means that language supports inheritance.

Well, structural typing is used for two things, type equivalence and subtyping [1]. If it isn't subtyping, then the consequence would be that Rob Pike talked about type identity only, which doesn't make sense.

Let's pull in the definition of subtyping again:

> [S]ubtyping (also subtype polymorphism or inclusion polymorphism) is a form of type polymorphism in which a subtype is a datatype that is related to another datatype (the supertype) by some notion of substitutability, meaning that program elements, typically subroutines or functions, written to operate on elements of the supertype can also operate on elements of the subtype.

Now, one misunderstanding seems to be that you assume that I'm saying that B is or should be a subtype of A. Note that I never actually said that and it's not relevant; neither inheritance nor delegation necessarily create a subtype relationship. See "Inheritance is not subtyping" by Cook et al., a seminal paper in programming language theory [2].

Structural vs. nominative or nominal subtyping revolves around the definition of substitutability. With nominative subtyping, we have the case where a subtype relationship is constructed based on explicitly specified relationships between types identified by their names; structural subtyping exists when the type signature of the subtype conforms to the type signature of the supertype.

The latter is the case in Go. Somewhat simplified, the type of a struct (strictly speaking, of a pointer to an instance of that struct [3]) is a subtype of the type of any interface it matches.

> It cannot, since you had to add an interface to the mix, unlike with your OCaml example, which proves that Go structs and OCaml classes are not similar in any way.

This is because `#a` is a short-hand for "the interface of `a`". So, yes, the OCaml example does the same thing. You can reproduce it by removing the inheritance from OCaml, and it'll still work (or, without the `#`, still won't):

  class a = object method foo = 0 end
  class b = object method foo = 0 method bar = 1 end

  let f (x: #a) = ()
  let () = f (new b)
Note that the classes are completely independent; there is no inheritance at all. It's just that the interface of `b` conforms to the interface of `a`, and hence instances of `b` can be passed to functions expecting `#a`.
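
The same demonstration translates to Go directly (hypothetical types, for illustration): there is no declared relationship between the two structs, only a conforming method set:

    package main

    import "fmt"

    type fooer interface{ foo() int }

    // a and b are completely independent declarations.
    type a struct{}
    type b struct{}

    func (a) foo() int { return 0 }
    func (b) foo() int { return 0 }
    func (b) bar() int { return 1 }

    func f(x fooer) { fmt.Println(x.foo()) }

    func main() {
        f(a{}) // both calls compile purely because the method sets conform
        f(b{})
    }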

[1] E.g. http://wiki.c2.com/?NominativeAndStructuralTyping

[2] http://dl.acm.org/citation.cfm?id=96721

[3] C++ and many other OO languages make a similar distinction, as runtime polymorphism does not work well for value types.


What affordances does Go even have for real composition though?


I'm no Go expert, so the following may just come from having not used it enough.

The two things that shocked me out of continuing with Go were exception handling and package management.

Exception handling is basically not implemented. Instead, the Go developers decided to add half a step from return codes towards exceptions and work with that. And now the community seems to have decided to just re-raise everything that comes its way. Nobody seems to see that such a simple, basic feature shouldn't require a piece of code; it should be done automatically by the language. And the community doesn't seem to see why it's nearly impossible to debug. I really hope the Go developers have some secret tools I just don't know about, because there are no stack traces, and at the abstraction level of a user, getting an error from an underlying framework printed out doesn't really help at all, especially when the code seems to continue after printing the error (making it a bug, by reporting a warning as an error). My favorite example is Kubernetes Helm reporting a connection error in my job's infrastructure every time you use it, but after that error it actually switches from IPv6 to IPv4 and just works. But it still feels good about reporting that error. wtf.

The next thing is package management. I had high hopes here after going through the trouble of working on that topic in 2012 when the Python community was developing its package management. I mean if you don't add that on top of an existing world, but create it from scratch, there is a chance to just do it right and not have to bother with all the pains of legacy systems, right? Well, Go decided it doesn't need versioning on imports. Just put the Github link there, don't even choose a branch. How can any code ever get finished that way? My best guess is that now they have to develop stuff on top of that already-born-as-legacy system and also try to integrate that with their core. Sad.

I really hope someone can add some corrections to this view.


The only time I miss exceptions is when I really need non-local return. Stack traces are available, see [1]. Needing non-local return really turns out to be the exception though (sorry). I find I want it when writing recursive descent parsers, and some other deeply nested control structures. In those cases, the panic/recover mechanism can be helpful. But for everything else, returning an error (using a standard error interface) works just fine.
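
A sketch of that panic/recover pattern for non-local return (parser names invented; not a complete parser):

    package main

    import "fmt"

    type parseError struct{ msg string }

    func parse(input string) (err error) {
        // recover converts the panic back into an ordinary error at
        // the top of the recursion, instead of threading an error
        // return through every level of the descent.
        defer func() {
            if r := recover(); r != nil {
                pe, ok := r.(parseError)
                if !ok {
                    panic(r) // not ours; re-raise
                }
                err = fmt.Errorf("parse: %s", pe.msg)
            }
        }()
        parseExpr(input)
        return nil
    }

    func parseExpr(s string) {
        if s == "" {
            panic(parseError{msg: "unexpected end of input"})
        }
        // ... deeper recursive descent would go here ...
    }

    func main() {
        fmt.Println(parse(""))
    }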

As far as package versioning goes, what we ended up doing is vendoring everything into our source-control system. This fits our use better all around. For our production systems, we don't want inequality bounds on our library versions: we have tested our system with version X, and we only want to ship with version X, not X.1 or X.2. Vendoring allows us to pick a specific version. Admittedly, this makes more sense in the Go world, where executables tend to be statically compiled.

Vendoring makes it trivial to track local changes to code in our dependencies, and ship them upstream when appropriate. It makes new developer setup simpler. Finally, the Go toolchain has support for vendored code that makes the setup relatively painless. And near instant compilation means that you don't have to wait very long for your dependencies to compile.

[1] https://golang.org/pkg/runtime/debug/#PrintStack


Thanks for providing additional information. What do you do when the user of your tool has a different point of view than yours? For instance, look at the helm example I provided. You may start a network call over IPv6 first, and when it fails (with an error, which is correct) you continue with IPv4. If either one of these works for your user, there is no error. At best there is a warning scenario where you want to educate them about finally switching to IPv6. But if you report a connection error in almost any scenario, that would be a bug in your code, because the connection didn't error: you did a test which failed, which was expected by your code and handled correctly.


I'm about 8 months into using Go for a few largish projects, and I'd say these are probably the two biggest things I still struggle a bit with (not generics, as others seem to obsess about).

On errors, I'm really of two minds. In a way, it is a lot like how Java started with checked exceptions: it forced you to deal with the error. But at some point most people decided that was annoying and switched to runtime exceptions for everything, which, while requiring a lot less code, still led to errors often bubbling up all the way to the user.

I think checked is the right thing, but it does require developers to be thoughtful and not just throw errors upwards. If you accept that checked is the route you want to go, I don't find Go's use of error values worse than exceptions.

On the package management front, I think the current best practice for projects that must not break is to vendor your dependencies. Go is working towards having better tools to let you specify SHAs for your dependencies easily, but we aren't fully there yet.

While it took a bit to wrap my head around using `govend` to vendor my dependencies, in the end it really hasn't ended up being a big pain point in practice. I also never have to worry about a dependency either disappearing or pulling a trick of shipping the same version # with different code. (or having the repo/package manager be down, which anybody who has shipped a lot of code will tell you has happened)

So yes, I agree these are both weird, but they aren't deal breakers. For our particular application, I really love Go, more so than I have any other language in recent memory.


> not generics as others seem to obsess about

One reason people obsess about generics is specifically because of error handling. With generics, you could implement Result and Option types, which make error handling significantly more sane.
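
For illustration, a bare-bones Result sketch using the type parameters that eventually landed in Go 1.18 (a rough sketch, far less ergonomic than Rust's; names invented):

    package main

    import (
        "errors"
        "fmt"
    )

    // Result holds either a value or an error (Go 1.18+ generics).
    type Result[T any] struct {
        val T
        err error
    }

    func Ok[T any](v T) Result[T]        { return Result[T]{val: v} }
    func Err[T any](err error) Result[T] { return Result[T]{err: err} }

    func (r Result[T]) Unwrap() (T, error) { return r.val, r.err }

    func main() {
        r := Ok(42)
        if v, err := r.Unwrap(); err == nil {
            fmt.Println(v)
        }
        _ = Err[int](errors.New("boom"))
    }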


Personally I loathe this style of programming. It's not that it's difficult, it just seems to obscure code a great deal.

Writing this sort of thing in Rust:

    fn some_function(a: &A) -> Result<&B, SomeError> {
        let c = foo(a)?;
        let d = foobar(a, c)?;
        Ok(if xfoo(c) {
            let e = blah()?;
            bar(d, e)?
        } else {
            baz(d)?
        })
    }
where you have to write every function in this pseudo-do-notation, in which 'return' is just wrapping the return expression in 'Ok' and `a <- expr` becomes `let a = expr?;`, is just horrible.

I'd much rather write this:

    fn some_function(a: &A) -> &B throws SomeError {
        let c = foo(a);
        let d = foobar(a, c);
        if xfoo(c) {
            let e = blah();
            bar(d, e)
        } else {
            baz(d)
        }
     }
See how that's so much cleaner? It's not actually any different from exceptions anyway, you're basically using them like exceptions, and they're implemented in the same way. The difference is that in the latter the code is much simpler and easier to understand. That's all.

In fact, that syntax could be added to Rust (after 6-12 months of bikeshedding as usual) and just have it automatically translated to the above anyway.

The other issue with Result/Option is that people start doing really horrible things like adding Option::map. Sorry but it's not a container that has 0 or 1 things in it. It's an optional value. That they're mathematically equivalent doesn't mean that they're the same thing conceptually. It's as bad as pretending that Result<T, Err> is useless and everyone only needs Either<L, R> where by convention R is the error value. God please just no.


> See how that's so much cleaner?

No, I don't. I look at the former snippet and I can easily tell each and every function invocation that can cause SomeError. In your theoretical style, I have no idea whether foo, foobar, xfoo, bla, bar and/or baz will throw that error. I prefer explicit over implicit since I find it far more readable.

> really horrible things like adding Option::map

You can quibble about the names (Option and map), but Option is essentially the Maybe monad and map is essentially fmap, so you're kinda arguing against core functional programming constructs.


>No, I don't. I look at the former snippet and I can easily tell each and every function invocation that can cause SomeError.

The reason that functions have type signatures is that you can read them. You can tell which functions can cause SomeError by going and reading their definitions.

>I prefer explicit over implicit since I find it far more readable.

'Explicit over implicit' is dogma. Rust requires you to annotate your code with gibberish in cases where it is not necessary.

>You can quibble about the names (Option and map), but Option is essentially the Maybe monad and map is essentially fmap, so you're kinda arguing against core functional programming constructs.

That's literally my entire point. The attitude that it's technically a Functor so it makes sense for it to be called map? No, it doesn't. It's not a map. You're not mapping over anything. Naming is important.

Calling it 'the Maybe monad' shows that you actually have no idea what you are talking about. It's not 'the Maybe monad'. The Maybe monad is the instance of Monad for Maybe. It is not Maybe itself.

The entire concept of having the literal 'Monad' word as a word in your language, a thing that you use in programming, is very stupid. Monad is not a useful or good abstraction. Maybe is a good abstraction. Or Optional, or Option, or whatever you decide to call it. But Monad is a bad abstraction. Abstracting over superficial syntactic similarities between completely different constructs is completely stupid.

The name being terrible is not 'quibbling' by the way. Naming is incredibly important. Calling it 'map' just shows how out of touch Rust is with real programmers.


You'd need generics and algebraic types to implement Result/Option


False. That feature allows a more efficient implementation, but (T, error) (or an equivalent struct) can be reasoned about in much the same way as Result<T>. You don't need a tagged union when you can use the nil-ness of one of the two values in the pair as the tag. Similarly, Option<T> is just a wrapper around a nullable T.
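
A sketch of both claims (helper names invented): Option<T> degenerates to a nullable pointer, and Result<T> to the (T, error) pair:

    package main

    import "fmt"

    // Option-like: nil plays None, a non-nil pointer plays Some.
    func find(xs []int, target int) *int {
        for i, x := range xs {
            if x == target {
                return &i
            }
        }
        return nil
    }

    func main() {
        if i := find([]int{3, 1, 4}, 1); i != nil {
            fmt.Println("found at", *i)
        } else {
            fmt.Println("not found")
        }
    }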


That would let people do this, and I hear people ask for it a lot. I dunno if it'd actually be better without the possibility of not-wow-slow combinators.

Golang has all these weird performance pitfalls that you only hit if you contort the language too hard. Naming combinator functions sometimes trips those conditions.

I'm of the opinion that even just pattern matching and the kind of nil type propagation checking that TypeScript does could help enormously.

Generics won't fix error handling without compiler help, and the best way to get that help is to introduce pattern matching as a forcing function.


The type you propose can inhabit both variants at once; the entire point of Option is that it is either something or nothing.


Checked exceptions suck. Every method has to explicitly throw them up to a higher level where they can be handled, causing tons of useless boilerplate. It's much better to just let unchecked exceptions bubble up to a higher level of the app. This is a best practice in Java, so letting the exception bubble all the way back to the user is just poor programming.


I had to google what "checked" actually means. But in Java you can throw parent classes, and in Python you don't need to declare them at all. And honestly, having a bad implementation is still better than no implementation. The main point here is to get correct, helpful error reporting so people understand what's going on. If I spend another 4 hours debugging an error message that was actually just a warning and not a biggie at all, I'm going to throw up.


That's exactly what you have to do in Rust anyway though.

Except instead of just writing `throws SomeError` in the declaration line of the function, you have to annotate virtually every line of your function with `?` and wrap the return line in `Ok(...)`.


The ? RFC did include a "catch" construct, and there's been some discussion about doing something more like "throws SomeError." We'll see!


It's far too late for "we'll see".


Package management is being worked on as part of the hopefully-to-become-official 'dep' tool, which will prefer semantically versioned releases over pulling trunk. Things might be good here in 12-18 months. It's been acknowledged as a problem by the language developers.

Error handling is still exactly where you remember. Most of your code is still 'if err := p.something(); err != nil { return fmt.Errorf("I need to annotate the error so I know where this happened: %v", err) }'

Yes, you still tend to get regurgitated and poorly applied rationalizations if you bring up limitations. However, the language designers have been soliciting use cases and requests for improvements for Go 2.0 via their wiki. It does involve educating and convincing the community, which is a hard road and likely only to get submissions from the echo chamber, so I'm not holding out much hope beyond something like generics; it would be a losing battle to pitch error handling changes to true believers. Time will tell if this is a good approach or not, as opposed to messier changes like you see in something like Python.


Package management is currently being worked on and the tool [1] is stable now, at least the user-facing parts. Here's where things stand as of a few days ago [2]. There have been efforts to use dep in Kubernetes [3][4].

1. https://github.com/golang/dep

2. https://sdboyer.io/dep-status/2017-08-17/

3. https://github.com/golang/dep/issues/110

4. https://github.com/golang/dep/issues?q=is%3Aissue+kubernetes...


The go dep tool has been worked on and promoted by some of the Go committers over the past year or so, but we don't really know if the Go language designers with the clout to change the toolset have been sold on the idea yet, or if it will be vetoed later on. Some of them haven't said anything in support of it (or against it), so go dep is still really just a proof of concept.


>especially when the code seems to continue after printing the error (making it a bug, by reporting a warning as an error).

Maybe I'm misunderstanding something, but if you call a function and it returns an error, it's your responsibility to ensure that you handle it. Generally you'd assume any data returned by the same call that returned the error is useless (except stuff like EOF, in some cases?).


Yes, exactly. That's what one would assume. I don't understand why a whole language community decided to not handle any errors and instead just bubble them up without filtering, without adding more notifications about the context (like log messages), and without trying to recover. Why would they expect a user to know their whole dependency tree and all their error messages?


I am not aware that they did. Most good Go code does not do this. That there exist some badly written Go programs does not contradict this. Go functions that can fail should return an error as part of the function returns. Like with exceptions, you can decide to ignore any error or properly handle it. Even with checked exceptions, you can have an empty "catch", which means you are just ignoring it. So the difference is rather syntactical. And like with exceptions, proper error handling is entirely the responsibility of the programmer who writes the error handling code. The Go version of error handling requires less typing overall, and makes it a little bit easier to ignore errors. Unhandled exceptions, on the other hand, can unwind the stack arbitrarily far up until they eventually find a handler - but there is no guarantee that this handler can properly handle the exception, as it might be separated too far from the error source.


As far as I know bubbling up errors instead of handling them is recommended practice: https://github.com/nats-io/go-nats-streaming/issues/143#issu...


>The Go version of error handling requires less typing overall

Do you mean as compared to using exceptions in a language like Python or Java? If so, can you give an example of code that does the same task, in both types of languages, to make this more clear? Thanks.


I was mostly thinking of Java. Ignoring an exception would be done with try {....} catch (Exception e) {}, while in Go, you would do: x,_ := f(...). And if you want to check a returned error I think the if err!=nil {...} is still a bit less typing than the Java version, and you don't have to declare checked exceptions you might throw.


In Go, if you call three functions you need to repeat the

  x, err := do_something()
  if err != nil {
          return nil, err
  }
ceremony three times. In Java it happens by default, because the language designers agreed this is by far the most common case.

Ignoring an error is almost always a serious mistake, so the fact that Go makes it easy and not blatant is not a good thing.


Yes, I certainly was not advocating ignoring any error. Which is why exceptions are of dubious use: if you only check exceptions around a larger block of function calls, or every several levels of the call stack, then you might miss the exact source of the exception, unless the exception type is unambiguous. That is why I find the Go version tedious for sure, but still better than wrapping single function calls in try... catch.


>ceremony three times. In Java it happens by default,

What happens by default? Not clear. (I know Java, but not up to date with recent versions.)

>Ignoring an error is almost always a serious mistake

Agreed.


In Java, an exception outside a catch block automatically stops the method and raises the exception to the caller, and then their caller, and so on. In Go you have to write this after every function call (except the tiny minority that can only panic).


Thanks. I did know that about Java (having used it); it was just not clear to me from your earlier wording that that is what you were saying. NP.


Got it now.


Google's C++ style guide states that you should not use exceptions (https://google.github.io/styleguide/cppguide.html#Exceptions). Golang was probably designed by people following those same guidelines ...


If you want a stack trace with your errors you can use errors.Wrap()[1]. This repo should replace the stdlib errors package.

1. https://github.com/pkg/errors/blob/master/errors.go#L180
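
Usage is roughly like this (file name invented); with pkg/errors, printing the error with %+v includes the recorded stack trace:

    package main

    import (
        "fmt"
        "os"

        "github.com/pkg/errors"
    )

    func readConfig(path string) error {
        f, err := os.Open(path)
        if err != nil {
            // Wrap records the stack trace at this call site.
            return errors.Wrap(err, "reading config")
        }
        f.Close()
        return nil
    }

    func main() {
        if err := readConfig("/no/such/file"); err != nil {
            fmt.Printf("%+v\n", err) // %+v prints message plus stack trace
        }
    }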


Thanks! Can this also be added to already-compiled tools somehow? E.g. Kubernetes uses a lot of Go components, but of course I only receive them in compiled form.


Don't want to hijack the article, but we've built Bugfender using Go. The problem is, we thought this was an internal (experimental) project, and so Go (as a new language) would be fun to try out. But then Bugfender started to take off and we ran into serious problems making Go scale. Not because it is a bad language, but simply because we were new to it ourselves.

Today we're still running on Go and we're more or less "ok" with it now, but it was a difficult path to get there, and while it's fun to use a new language (or new anything), it's probably not the best choice for a startup product, especially when you need to hire people.

We've summarized our experiences here:

- https://bugfender.com/blog/one-year-using-go/

- https://bugfender.com/blog/go-pros-cons-using-go-programming...

- https://bugfender.com/blog/three-years-bugfender-9-5m-users/


I would have thought Go is easy to hire people for. It's such a simple language and has such a well-designed standard library that you can pick it up extremely quickly.

I think "anyone can learn it" is one of its main benefits. Compare that to Rust or Haskell for example...


There's a lot of languages that can make that claim, and lots of developers that would pick up a new C-like language in a week or so. The challenge is that companies that picked Go or $uncommon_language need to provide the space and training opportunity.

Go may be easy to learn, but mastering it is a thing on its own, just like the other languages.


Go's spec is much smaller than most mainstream languages. Seriously, it's a weekend read to understand the entire language specification (and it's actually readable)


Mastering a language is not the same as reading the entire spec, or even internalizing that spec. Scheme or Forth or Io are all probably smaller than Go. Perhaps even Smalltalk is (the Smalltalk-80 spec is larger, but contains some of the library, tutorials, examples, and the entire spec for the compiler and the VM).

If you really want an extreme example language with the tiniest spec, you can always take Brainfuck. The entire spec probably fits in a few paragraphs, but it's not easy to master.


My comment mentioned mainstream languages. To my knowledge, Brainfuck is not a mainstream language used at any small, medium, or large corporation.

The point about the spec was the language is small enough to master.

Scheme (Lisps in general) is different because it is homoiconic. Forth is not mainstream, and neither is its paradigm. And well, I never heard of Io so I will check it out, looks neat.


And your point was refuted. Brainfuck not being mainstream is irrelevant; either a short language spec correlates with speed and easiness of mastery or it doesn't. And the counter-evidence shows that it doesn't.

You can't just casually ignore inconvenient exceptions to your beliefs. I mean, you can, but it's not something that reflects positively.


The board game Go has an extraordinarily simple rule set. On the other hand, it is an exceedingly difficult game to gain proficiency in.

A simple spec does not equate to a simple language. I would argue the opposite: a simple spec often means a large amount of hard work is punted on and left to the end-user.


Like its numeric tower?


I think the issue is finding people that want to learn it. Systems programmers are turned off because of its limitations, and your average Pythonista or Rubyist isn't interested in mucking around in type definitions or even thinking about concurrency.


> Systems programmers are turned off because of its limitations, and your average Pythonista or Rubyist isn't interested in mucking around in type definitions or even thinking about concurrency

Systems programmers were the bulk of Go's early adopters, even before it hit its first stable release.

I can't speak for Ruby, but as for Python - I've been writing Go professionally for five years, and my first talk on Go was actually at the New York Python Meetup. They specifically asked me to speak about Go because they had significant interest from their members in learning or using Go. This was back when Go was still on its 1.0 release.

Soon after that, Python added type hinting and concurrency to address needs that Python programmers had. That was enough for some, but there clearly has been interest from Python programmers in Go's approach, and there continues to be; Python is one of the more common language backgrounds for new Go programmers that I see.

Furthermore, there's no shortage of experienced developers who already know Go. While the numbers are evening out a bit as more companies have begun to adopt Go, there are still more experienced Go programmers looking for jobs where they can write Go full-time than the other way around. (And that's not even looking at people who are inexperienced programmers or experienced programmers who don't know Go but might be interested in learning.)

Go may not be for everyone, and that's fine, but there's really no shortage of good programmers who can write Go. If a company is evaluating languages and considering Go, availability of talent is a selling point, if anything, not a concern.


My subjective impression (supported by informal surveys [1]) is that most Gophers come from a Python or Node.js background, doing mostly network code, DevOps automation or CLI tooling. Rubyists are less likely to fall in love with Go, given the huge philosophical gap, though the pragmatists among them adopt Go for what it excels at (performant microservices and statically linked binaries).

Still, most newcomers to Go seem to come from a dynamic-typing background, and it shows in all the buzz about Go being 'strongly-typed' and 'helping you to detect typing errors', which sounds crazy to anyone who has played with C++ or C#, let alone an ML-family functional language.

And yes, Go also has a small but prominent contingent of C developers, and while some of them also do systems programming, this is generally not what they do in Go. This crowd seems to be mostly focused on tools and networking-oriented code, two areas where Go excels. The thorough standard library and top-notch static linking support make Go a hard-to-beat choice for this type of work, but it's a less obvious choice for traditional systems programming, which usually still eschews garbage collection.

You'll rarely find Go being used in most of the traditional systems workloads: drivers, kernels, high-spec graphics engines, emulators, JIT compilers, filesystems, browsers, and definitely not in real-time programs. The one traditional systems realm where Go does see some activity is lightweight databases and key-value stores (BoltDB, TiDB), but database performance often depends on concurrency no less than on memory, and in fact Java has long been popular for NoSQL databases (HBase and Cassandra), and even far more niche languages like Erlang gave rise to popular NoSQL solutions (Riak, CouchDB).

[1] https://blog.golang.org/survey2016-results


Despite the obvious "Hey this has a GC!" Go is actually surprisingly good as a systems language. You can do pointery stuff, assembly if you need, etc. I've used it for user-space device drivers on Linux (you can easily use it for ioctls and other low level stuff).
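
(Not from the parent, just to illustrate the escape hatch: a minimal ioctl wrapper using the syscall package, Linux-only. The package name, request number and argument layout are hypothetical; real drivers define their own.)

    package devctl // hypothetical package name

    import (
        "os"
        "syscall"
        "unsafe"
    )

    // Ioctl issues a raw ioctl(2) on f. Converting the pointer inside the
    // Syscall expression keeps it valid for the duration of the call.
    func Ioctl(f *os.File, req uintptr, arg unsafe.Pointer) error {
        _, _, errno := syscall.Syscall(syscall.SYS_IOCTL, f.Fd(), req, uintptr(arg))
        if errno != 0 {
            return errno // syscall.Errno implements error
        }
        return nil
    }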

I bet you could even write a kernel in Go. I think the only places where it really couldn't go (ha) are microcontrollers.

Edit: yep: https://github.com/jjyr/bootgo


I don't think your last point is entirely accurate. I've programmed in Python since the mid-90s and Ruby since 2005, and developers in both camps want concurrency. I think this is why Elixir has really taken off amongst Ruby developers. It is similar enough to Ruby that they don't feel lost, but it makes concurrency extremely easy; a lot more so than Go does, in my opinion.

My reasons for dismissing Go are mainly due to the way it handles memory. Concurrency is handled in Go using shared memory, as opposed to Elixir's private memory model, and Go's threads are not guaranteed to politely take turns the way Elixir's are. I also think Go's packaging system is quite flawed.

I also dislike Go's syntax, but that is just a matter of personal taste.


I checked out those blogs and none of them really go into any depth about scaling issues. Do you guys plan to share your learnings there? I, for one, would find that interesting.


> 1. It is possible to have both dynamic-like syntax and static safety IMHO: This convenience causes more harm than good in the long term. Note: My experience is that this is a minefield of bugs and defeats the point of type safety. Also, because Go lacks generics, there's a lot of boilerplate and interfaces being used as function arguments (just check large open source repos on GitHub and you will notice; see the sketch after this list).

> 2. It’s better to compose than inherit IMHO: It's better to have both tools available. Note: Inheritance, like static types, offers a more rigid structure and helps on large projects that will exist for the long term. In a language that does not offer something like traits, "composition of behavior" ends up being a hack. Of course, we also have the tendency of blaming the language for the faults of the programmers. I would think the example of the Vehicle is not the best to explain the paradigm of behavior composition.

> 3. Channels and goroutines are a powerful way to solve problems involving concurrency IMHO: Powerful, yes. But I would use actors instead. Note: Same problem: as your business problem gets complex, goroutines start to become a pain and the code will become unreadable and littered with hacks.

> 4. Don’t communicate by sharing memory, share memory by communicating. I read other comments here that express what I think better than I can articulate. Note: You still have to "synchronize" if you are working with shared resources. There's no magic solution.

> 5. There is nothing exceptional in exceptions Oh, boy! This causes more wars than "spaces vs tabs". IMHO: Good in theory. Too rigid in practice. Note: While it works for small stuff, the lack of exceptions becomes a problem in large projects, causing people to just ignore errors "for now" (and usually, forever).

In conclusion: The "cool features" of Go might teach some people about programming, but take a toll on more complex projects. I would use Go for small system utilities and tools but would never touch business logic with it.
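
To illustrate point 1: without generics, a reusable function typically takes interface{}, so type errors surface at runtime rather than at compile time. A minimal sketch (the names are made up):

    package main

    import "fmt"

    // Contains is "generic" only via interface{}: the compiler cannot
    // check that needle and the slice elements have compatible types.
    func Contains(haystack []interface{}, needle interface{}) bool {
        for _, v := range haystack {
            if v == needle {
                return true
            }
        }
        return false
    }

    func main() {
        xs := []interface{}{1, 2, 3}
        fmt.Println(Contains(xs, 2))   // true
        fmt.Println(Contains(xs, "2")) // false -- a type mix-up the compiler never saw
    }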


Hi, author here. Thanks for your opinion.

> 1. On a daily basis I work in a 200k LOC Ruby project, and from that perspective the lack of static typing is a minefield ;)

> 2. Since I finally learned how to do proper composition (~a year ago) I haven't used inheritance even once. Of course, you may say that my project is special, but I can't help feeling that inheritance is often overused.

> 3. Yes, I've also come across opinions that Go's channels are too low level to be used in a large commercial project. But still, as a concept I find them interesting.

> 4. True. But you can have one goroutine that'll just "guard" that resource and communicate with it from many places using one shared channel (see the sketch below).

> 5. TBH I've seen more flame wars about the "space vs tab" thing ;) As I mentioned in the article I don't think that's the best error handling pattern ever invented, but I just like the concept of treating errors as regular return values. IMO it's good to have it at the back of your head, regardless of the language you use.
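
A minimal sketch of that guarding pattern from point 4 (my own illustration, not from the article): one goroutine owns the map, and everyone else reads and writes it only through channels.

    package main

    import "fmt"

    type get struct {
        key   string
        reply chan int
    }

    // counter is the only goroutine that touches counts; other
    // goroutines "share" it strictly by communicating over channels.
    func counter(incr <-chan string, gets <-chan get) {
        counts := make(map[string]int)
        for {
            select {
            case k := <-incr:
                counts[k]++
            case g := <-gets:
                g.reply <- counts[g.key]
            }
        }
    }

    func main() {
        incr := make(chan string)
        gets := make(chan get)
        go counter(incr, gets)

        incr <- "hits"
        incr <- "hits"

        reply := make(chan int)
        gets <- get{key: "hits", reply: reply}
        fmt.Println(<-reply) // 2
    }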


"There is nothing exceptional in exceptions"

No, there's not, but it's a royal pain in the arse to have to keep passing them up through your function calls to the level that actually cares about them and will do something about them. Try/catch eliminates that boilerplate.


And even with exceptions, C++ is the only mainstream language I know that has a story for how to undo any investments made so far, including those that are not memory allocations.

In any other language, you have to at least write wrappers that execute commands and catch any exceptions to do specific cleanup actions (like aborting a database transaction).

And in C++, while it has a story for how to do that, implementations must track their own state to decide whether the reason for quitting is success or an exception. (This is more elegant in Haskell, which can do it with sum types and, for example, monads on top.)


I don't really agree that it's inelegant to decide whether the reason for quitting is a success or an exception, or whether it's necessary. Like, you've either committed, in which case you shouldn't need to do any cleanup, or you haven't committed, in which case you need to clean up, right?

If you have an object representing some sort of transaction, and you just have actually-do-all-the-work-at-once-on-commit-being-called semantics, you don't actually need to do anything in your destructor at all, right?

    #include <vector>

    template<typename T>
    class transaction {
        std::vector<T> things_to_do;
    public:
        void add_work_item(T t) {
            things_to_do.push_back(std::move(t));
        }
        void commit() {
            for (auto& t : things_to_do) {
                t.run(); // note: the method can't be named do(), a C++ keyword
            }
        }
    };


You are just making my point with that OOP mess (sorry). You are not using RAII to finish (meaning cleanup or rollback) the transaction.

What we actually want executed in the end (in terms of control flow, not necessarily what we would be willing to write) is something like the following clean procedural code.

    do_some_foo():
        start_transaction()
        r = do_thing_A()
        if r == CONFLICT:
            rollback()
            return r
        r = do_thing_B()
        if r == CONFLICT:
            rollback()
            return r
        commit()
        return OK


There's nothing remotely object-oriented about what I wrote. I don't see any inheritance. I don't see any polymorphism. I don't see any virtual member functions.

And of course I'm using RAII to roll back the transaction. When the object is destroyed the vector is destroyed and the transaction isn't run.

It doesn't make any sense at all, quite frankly, to use RAII to do what you wrote there. That's not what RAII is for, nor is it what RAII is good at. RAII is for managing resources.


Doesn't Go have any syntactic sugar for passing errors out of a function? Something like Rust's Carrier/From traits and try! or `?`.


No, Go "errors" are just a convention; there is absolutely nothing special about them.

Go has panic/defer, which are exceptions, but done badly. They were probably retrofitted into the language when its creators realized they needed exceptions anyway, which makes the whole "errors as values" thing a bit hypocritical. If they truly cared about errors as values, then yes, Go would support some form of try! macro.
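
For the record, here is the convention in question, fully manual (a sketch using strconv purely for illustration); this repeated check-and-return is exactly what try!/? automates:

    package main

    import (
        "fmt"
        "strconv"
    )

    // parseBoth shows the convention: errors are ordinary return values,
    // and every fallible call is followed by an explicit check-and-return.
    func parseBoth(a, b string) (int, int, error) {
        x, err := strconv.Atoi(a)
        if err != nil {
            return 0, 0, fmt.Errorf("first operand: %v", err)
        }
        y, err := strconv.Atoi(b)
        if err != nil {
            return 0, 0, fmt.Errorf("second operand: %v", err)
        }
        return x, y, nil
    }

    func main() {
        if x, y, err := parseBoth("1", "2"); err == nil {
            fmt.Println(x + y) // 3
        }
    }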


Yes, Rust would be seriously irritating without try! (and its short friend the question mark operator). So I feel the pain of gophers here.


Not really; the Go community tends to be very vocal against syntactic sugar.


Minor nitpicks, from someone who really likes Go:

> It is possible to have both dynamic-like syntax and static safety

Well, you can learn that one by coding in C#, too. In fact, I have the feeling this is a general trend across several relatively popular languages these days - provide as much as possible of the benefits of dynamic typing while keeping the benefits of static typing.

I sometimes think how nice it would be if I could write my code entirely without type declarations and have the compiler or some preprocessor figure out as much of the type information as possible.

> It’s better to compose than inherit

It depends, really. Personally, I think composition is the simpler solution more often than inheritance, but sometimes it is not.

I completely agree about the error handling, though - at first it was very tedious to handle all errors explicitly, but after a while I came to appreciate it. Once I had fallen into the habit of checking for errors without having to think about it too much, detecting errors became much easier, and deciding whether I could "deal" with some error or escalate it (possibly to the point of terminating my program) became more straightforward, too.


In my experience, I've found that inheritance becomes a significant burden on projects, especially when using 3rd party libraries.

If you need to modify something up in a base object in a 3rd party library, you essentially have to fork the project creating a new maintenance burden and breaking the upgrade path OR rebuild the entire inheritance tree.

It's one of the things that makes Ruby so useful as an object oriented language since I can write a patch that runs at startup to just monkey patch the base object in a couple of lines of code.

I think the composability approach gets it right in that regard.


> In my experience, I've found that inheritance becomes a significant burden on projects, especially when using 3rd party libraries.

I think inheritance is like any sufficiently powerful programming technique - with great power comes great potential for shooting yourself (and others) in the foot.

But there are situations where inheritance is an elegant and natural approach. They just are not very frequent.

Python's standard library for example has (or used to have at least, it's been a while since I looked) a framework for building network servers where you create a server by writing a class that inherits from two classes provided by the framework - one for the type of socket you'll be dealing with (TCP, UDP, Unix sockets), and one for defining how you want to do concurrency (forking, threading). Then you just override one method that implements the actual request handler, and you're good to go. I consider that a clever and elegant use of inheritance.

Just to be clear, all in all, I tend to agree with you, and in my own code I use inheritance only very rarely. But I think that it's wrong to all-out condemn a programming technique just because it can be abused. It just means one has to carefully consider the advantages and disadvantages.


Oh, I'm definitely not condemning it. Honestly, I think it makes more sense when you are dealing with a GUI or desktop application where an object can be more representative of elements that a user is interacting with.

For server side projects is where I've experienced it causing problems over project lifetimes. The maintainability complexity is where I've been bitten the worst.


I disagree that this is elegant.

    import socket
    import threading
    import socketserver

    class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):

      def handle(self):
        data = str(self.request.recv(1024), 'ascii')
        cur_thread = threading.current_thread()
        response = bytes("{}: {}".format(cur_thread.name, data), 'ascii')
        self.request.sendall(response)

    class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
      pass

    def client(ip, port, message):
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect((ip, port))
        sock.sendall(bytes(message, 'ascii'))
        response = str(sock.recv(1024), 'ascii')
        print("Received: {}".format(response))

    if __name__ == "__main__":
      # Port 0 means to select an arbitrary unused port
      HOST, PORT = "localhost", 0

      server = ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler)
      with server:
        ip, port = server.server_address

        # Start a thread with the server -- that thread will then start one
        # more thread for each request
        server_thread = threading.Thread(target=server.serve_forever)
        # Exit the server thread when the main thread terminates
        server_thread.daemon = True
        server_thread.start()
        print("Server loop running in thread:", server_thread.name)

        client(ip, port, "Hello World 1")
        client(ip, port, "Hello World 2")
        client(ip, port, "Hello World 3")

        server.shutdown()

Why not this:

    def handle(request):
      data = str(request.recv(1024), 'ascii')
      cur_thread = threading.current_thread()
      response = bytes("{}: {}".format(cur_thread.name, data), 'ascii')
      request.sendall(response)

and this:

    options = socketserver.ServerOptions(
       concurrency=socketserver.Threading,
       sockets=socketserver.TcpSockets)
    server = socketserver.Server((HOST, PORT), options)

There's really not any use of inheritance here anyway: it's just a hack. It's such a hack that you need to inherit those two classes (ThreadingMixIn and TCPServer) in that order, because one overrides a method of the other, and if you inherit them in the other order it just doesn't work.

It might simplify the implementation, I don't know, but it definitely doesn't simplify the interface.


> In fact, I have the feeling this is a general trend across several relatively popular languages these days - provide as much as possible of the benefits of dynamic typing while keeping the benefits of static typing.

Some languages go the opposite way. Dart 1.0 has optional typing, but Dart 2.0 will be statically typed (with type inference though).


> Dart 1.0 has optional typing, but Dart 2.0 will be statically typed (with type inference though).

Not quite; it's more that compile-time typing is getting cleaned up. You can still omit types, which will then either be inferred or set to `dynamic`. The following is valid strong mode Dart:

  f(n) {
    if (n <= 1)
      return 1;
    else
      return n * f(n - 1);
  }

The following will no longer work:

  int x = "foo";

In short, you can still omit types. It's just that if you declare them, they are enforced [1]. Dart will also infer them if possible, i.e. the following is illegal:

  var x = 1;
  x = "foo";

Here, x is inferred to be `int`, which makes the assignment of a string illegal. However, the following works:

  var x = true ? 0 : [];
  x = "foo";

Here, the type of `x` cannot be inferred, so it becomes `dynamic`, and therefore the assignment of "foo" becomes valid.

You can turn this off with --no-implicit-dynamic; with this option, all types must either be declared or have to be inferable.

[1] Sometimes not at compile time, though: Dart allows implicit downcasts and covariant generics at compile time and will insert runtime checks to catch those situations.


> with type inference though

That is what I was trying to get at - with type inference, one can omit many of the type declarations (thus getting more flexibility and shorter code) without sacrificing type safety.

Whether or not it is a good idea to omit type declaration is a different question, but in cases like "T a = new T(...)", the type declaration on the variable is kind of redundant anyway.


I came to really dislike embedding interfaces and types, but I know mine is a minority opinion on this.

When I came to Go from OO (Ruby), and before I learnt to use embedding, something blew my mind: I could always say where a function was coming from, and which functions were available in the current scope. No need to grep anymore to find which parent class or which included module a method was coming from.

And after I started using embedding all over the place, I realized that problem was back again. Go made me love the idea of "simplicity, not easiness" and the fact that I can tell what code is doing immediately, without wondering where its parts come from. Today, I prefer to avoid embedding altogether. I pass dependencies as function parameters instead of embedding types, and this also has the advantage of avoiding initialization problems (and if I omit a parameter, I'm immediately warned by the compiler).
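
Roughly the contrast I mean (my sketch; the worker types are made up):

    package main

    import (
        "errors"
        "log"
        "os"
    )

    // Embedded: WorkerA silently gains all of *log.Logger's methods, and
    // its zero value has a nil logger that only fails at call time.
    type WorkerA struct {
        *log.Logger
    }

    // Explicit: the dependency is a constructor parameter; forget to pass
    // it and the compiler (or the nil check) tells you immediately.
    type WorkerB struct {
        log *log.Logger
    }

    func NewWorkerB(l *log.Logger) (*WorkerB, error) {
        if l == nil {
            return nil, errors.New("logger is required")
        }
        return &WorkerB{log: l}, nil
    }

    func main() {
        var a WorkerA
        _ = a // a.Println("boom") would panic: the embedded *log.Logger is nil

        b, _ := NewWorkerB(log.New(os.Stderr, "", 0))
        b.log.Println("explicit and initialized")
    }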


> Don’t communicate by sharing memory, share memory by communicating.

Well, if you're using immutable data-structures, then structural sharing can be very useful. You can then pass large data-structures in constant time, and of course you can "modify" them efficiently using immutable techniques.

As long as you don't perform any writes in a shared data-structure, the memory hierarchy should be perfectly happy and no delays or locks should be necessary.

Of course, as you start scaling across multiple machines, there will be different trade-offs, as there will not be a shared memory bus.


You're not "communicating by sharing memory" though, at least in the idiom's sense (in which "memory" is an alias for "mutable state"), the structural sharing is just an optimisation.

Which may not even be desirable: despite being built entirely upon immutable data structures, Erlang only shares large binaries between processes (AFAIK even the new maps are copied despite being HAMTs), to ensure process heaps, and thus garbage collections, are completely independent.


> Erlang only shares large binaries between processes to ensure process heaps and thus garbage collections are completely independent.

That sounds unnecessarily restrictive. At least they should give developers a choice (e.g., "this process should receive integral data, and should not be disturbed by a GC cycle of other processes").

Also, concurrent (not stop-the-world) GC techniques could make this problem moot.


> Also, concurrent (not stop-the-world) GC techniques could make this problem moot.

It would mostly introduce insane additional complexity. Erlang GC works per-process (each process has its own private heap and stack) and you'd normally create lots of small processes, so the GC is concurrent as an emergent effect of the system construction.

Not to mention that processes can be distributed across nodes, for which your scheme completely breaks down: what's supposed to happen if you ask for memory sharing across the network?


> It would mostly introduce insane additional complexity.

GoLang has a concurrent garbage collector.

> what's supposed to happen if you ask for memory sharing across the network

Yeap, I mentioned that. Again, as a developer you want the choice. You don't want your language telling you "sorry, that's too complicated for your brain, so you can't do that".


> GoLang has a concurrent garbage collector.

Golang also has a single, shared, mutable heap; it does not have tens or hundreds of thousands of individual heaps.

> Yeap, I mentioned that.

No, you did not.

> Again, as a developer you want the choice.

Er… no?


> Authors of Go wanted to give users more flexibility by allowing them to add their logic to any structs they like. Even to the ones they’re not authors of (like some external libraries).

Except that you can't: you "cannot define new methods on non-local type[s]". You have to compose them. That makes the struct function definition syntax a bit moot, in my opinion.


I've never thought of that syntax as conveying a particular statement. They could also have done

  func *Receiver.method(arg int)
instead of

  func (r *Receiver) method(arg int)
The advantage is that you get to name the receiver instead of having to use a keyword name like `this` or `self`. I've come to like this design decision.


> The advantage is that you get to name the receiver instead of having to use a keyword name like `this` or `self`. I've come to like this design decision.

self, at least if you're talking about Python, is not a keyword; it's just a convention. You can use whatever you want.


It's not just Python that uses self. In Rust, for example, it's sugar for `self: Self`.


He speaks about non-local receivers.

You can't have:

    func (r *otherPackage.Receiver) method(arg int)


What's the issue with composing a new type that has otherPackage.Receiver in it, and defining the new method on the new type?
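
For example, with a real foreign type (my illustration, not the parent's):

    package main

    import (
        "bytes"
        "fmt"
    )

    // You can't declare func (b *bytes.Buffer) WriteLine(...), since
    // bytes.Buffer is a non-local type, but embedding it in a local type
    // works, and the Buffer's own methods are promoted.
    type LineBuffer struct {
        bytes.Buffer
    }

    func (b *LineBuffer) WriteLine(s string) {
        b.WriteString(s) // promoted from the embedded bytes.Buffer
        b.WriteByte('\n')
    }

    func main() {
        var b LineBuffer
        b.WriteLine("hello")
        fmt.Print(b.String()) // String() is promoted too
    }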


For me there is no issue.


I'm guessing: you still can't access unexported members (private vs Public).


The question could be asked the opposite way: why is the new type necessary?


Because if you allow non-local method declarations, you have a ton of stuff to think about:

- Are these methods exported?

- If yes, how do you import them? Explicitly or implicitly?

- Given a method call, how can the developer tell where it was defined? How can she know if the same method call is available when using the same library in some other project? Remember that not everyone uses IDEs.

- etc.

I can see why the Go designers decided that it's just not worth it. I've seen how you turn a language into a mess with non-local method declarations (cough Ruby cough).


Yes, you're right. I was somehow convinced that this is possible. Which is BTW not a good idea anyway (or at least I really don't like similar concept in Ruby - monkey patching).

Do you know why the Go authors decided to put method definitions outside type definitions? Technically, I don't see anything stopping the syntax from supporting the opposite.


"Error theater" and zero initialized values, while understandable due to other design decisions, are the biggest source of frustration whenever I have to work on go code.

Off topic from the article, I think, but what go has taught me about programming is that some parts of our industry are stuck in time and intellectually stagnant.

Go is a better C. I would prefer Go over Python. But really, when the state of the art (and production ready!) is light years ahead of Go, it frustrates me to no end to see colleagues spending so much time on things in their day to day that are solved problems in other languages.


This sort of attitude is never going to win over people that you are talking about. I hate to bring politics into things, but it's like calling all Trump supporters hopeless and backwards. Yeah maybe they are, but they're going to react to that by never listening to anything you say again, so even though it's true, it's not helpful.

The 'state of the art' is increasingly over-the-top complexity, to a truly ridiculous level. Go programmers don't want to open up the documentation for a library and see this:

http://i.imgur.com/ALlbPRa.png

or look at the language reference and see this:

http://en.cppreference.com/w/cpp/language/constraints

and I don't blame them.


> Thanks to goroutines and channels Go programmers can take a different approach. Instead of using locks to control access to a shared resource, they can simply use channels to pass around its pointer. Then only a goroutine that holds the pointer can use it and make modifications to the shared structure.

How does this prevent data races if more than one goroutine holds a pointer to the shared structure and they're running concurrently?


As others have said, the scenario you describe wouldn't prevent data races at all.

Under the channels model, you wouldn't want more than one goroutine to be accessing the shared structure. One routine would hold the reference to the resource, and other routines talk with the first routine via channels (i.e. share memory by communicating).


Yeah, I think you'd still want a mutex to handle that. I always use a mutex when dealing with goroutines and maps, for example.
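
The usual shape of that (a minimal sketch):

    package main

    import (
        "fmt"
        "sync"
    )

    // Counts guards a map with a mutex: every access, read or write,
    // takes the lock, because Go maps are not safe for concurrent use.
    type Counts struct {
        mu sync.Mutex
        m  map[string]int
    }

    func NewCounts() *Counts {
        return &Counts{m: make(map[string]int)}
    }

    func (c *Counts) Incr(key string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.m[key]++
    }

    func (c *Counts) Get(key string) int {
        c.mu.Lock()
        defer c.mu.Unlock()
        return c.m[key]
    }

    func main() {
        c := NewCounts()
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                c.Incr("hits")
            }()
        }
        wg.Wait()
        fmt.Println(c.Get("hits")) // always 100; without the lock this is a data race
    }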


It doesn't. I think goroutines make concurrency easier than it is in, say, Java, but Go still passes around state, and that is where a lot of the issues with concurrency arise. Concurrency is much easier to deal with in functional languages, where data is transformed via chains of functions rather than stored in state.


The difference is that in Go, it is unidiomatic to share mutable state across threads, while in C/C++/Java it's not.


It doesn't.


It does not.



