Sadly, a lot of people won't even try Clojure since it is dynamically typed. I see their point, but nevertheless Clojure does something really well here. As the author observes, designing around a few core data structures results in high code reuse. A library like spec is also way better at encoding business requirements than any mainstream language's type system; e.g. a number in a business context mostly has constraints like a particular range.
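For example, a rough sketch of the kind of constraint spec expresses directly (::quantity is a made-up business rule):

    (require '[clojure.spec.alpha :as s])

    ;; a quantity must be an integer between 1 and 100
    (s/def ::quantity (s/and int? #(<= 1 % 100)))

    (s/valid? ::quantity 42)        ;; => true
    (s/valid? ::quantity 1000)      ;; => false
    (s/explain-str ::quantity 1000) ;; => a human-readable failure report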
I'm really looking forward to seeing someone come up with a static type system inspired by Clojure's approach.
PHP, JavaScript, and Python are also dynamically typed. The solution isn't yet another incompatible type system; there are already so many type systems that don't work together. Spec is also Turing complete.
Turing completeness in a type system lets you use the type system as a meta-coding language, which is a fantastic amount of string to hoist yourself with. Putting meta-languages inside languages is a miserable place to be.
If you want types at your system boundaries, like on your API, then spec or Malli is great for that.
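A minimal sketch with Malli, assuming it's on the classpath (the Order schema is made up):

    (require '[malli.core :as m])

    (def Order
      [:map
       [:qty   [:int {:min 1}]]
       [:email [:re #".+@.+"]]])

    (m/validate Order {:qty 2 :email "a@example.com"}) ;; => true
    (m/explain  Order {:qty 0 :email "nope"})          ;; => data describing what failed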
A great thing that Rich Hickey always advocated is separating out problems and then solving them individually. I think modern type systems try to take on too many problems at once. Static analysis tools are great because they're decoupled in time and laser-focused on detecting mistakes only.
Can we have a REPL-driven language that's statically typed?
My hope is yes; it's just that the work hasn't been put in yet to create the equivalent of TypeScript for Clojure or Lua, something that compiles down to the actual, extensible language.
I always wish that it would become unnecessary to have to choose between stability and extensibility when selecting a programming language. Having a Clojure with static types or a typed Lua (Teal) would put us so much closer to that goal.
There's always Common Lisp using the popular SBCL implementation. It's not statically typed but you still get compiler warnings about types at compile time -- even cooler is that SBCL uses the types for assembly code optimization, so it can also guide you to write faster code with type warnings, e.g. about places where it couldn't infer the type and is forced to use a generic addition.
There's also a handful of attempts to bring stricter type checking guarantees on top as a library, but I haven't experimented with those. I'm very much in the crowd that doesn't find static types very valuable (apart from aiding performance) even for multi-million-line projects, but if I can get a few non-strict warnings like SBCL gives, that's a nice bonus, much like running a concurrent linter for extra warnings in <any lang>.
I liked Clojure a lot, but fell out of love once I realized how much it's still missing from the Common Lisp it was inspired by (non-bonkers OOP, the condition system, and reader macros being the biggest to me), and also how its pervasive laziness is something I'd rather opt in to than vice versa. Some people feel the same about its immutable-by-default structures; I don't mind either way. (E.g. I can opt in to them in CL, that's fine, and Clojure's ways of opting out of them aren't bad either.)
As I said elsewhere in this thread, Haskell has a good REPL dev experience. Personally, I am a much better Lisp programmer (I've used Lisp languages since around 1978) than Haskell programmer. My journey learning Haskell is almost a decade long, and I am still nowhere near as productive as in Common Lisp. I find that Clojure is usually a high-productivity language, but I sometimes get stuck on platform details, and that gets me out of a flow programming experience.
If you want to try Haskell, work through a few tutorials, and if it looks good, you can grab a free copy of my Haskell book: https://markwatson.com/opencontent/haskell-cookbook.pdf There are much better Haskell books than mine, though; my book just reflects my own learning process.
Depends on what you mean by "repl-driven". If you mean a statically-typed language with a decent repl, then yes, you can have that--for example there are Haskell and ML implementations that give it to you.
If you mean "repl-driven" in a stronger sense, in the sense of a livecoding repl-driven environment that supports building programs by interactively modifying them as they run, then about the only place you find full-featured support for that style of programming is in old-fashioned Lisp and Smalltalk systems. Maybe also in Factor, and arguably in FORTH (though with fewer conveniences).
There's no reason in principle that you can't have a full-featured repl-driven environment for a statically-typed language; I just don't know of any. That's not a big surprise, though. There aren't all that many of them for dynamically-typed languages, either. It takes a lot of work to build one, and the builders pretty much need to know up front what it is they're trying to build, which means they probably need to have seen one before. Most programmers haven't.
Static types aren't particularly an obstacle, but strong immutability is. A full-featured repl-driven livecoding environment wants the programmer to be able to inspect and change anything and everything in the live environment as it runs. Soft immutability is fine; it's okay if you have to say "Mother, may I?" before changing something. Hard immutability is a problem, though. If it's actually _impossible_ to change something, that's incompatible with the nature of a repl-driven livecoding environment. A programmer accustomed to livecoding environments will experience that impossibility as a bug in the environment.
By process of elimination, because the type of the variable is `(U string? (List-of string?))`, the type checker can prove that if `(string? x)` is false, the type of `x` must be `(List-of string?)`, and that `(length x)` succeeds.
Scala for example is statically typed and had a REPL from day 1.
However, when using a language with a great type system, the need for a REPL is much lower. You mostly don't need to try things out because the types guide you. These languages might have a REPL, and it might be useful, but REPL-driven development is not really necessary.
> I always wish that it would become unnecessary to have to choose between stability and extensibility
Low language complexity - not much more than EDN and everything is an expression, plus a few extras like destructuring. Certainly simpler than any other language I have used.
Extensibility - again, one of Clojure's strong points with macros.
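As a toy sketch of that extensibility, a new control-flow form is just an ordinary macro:

    (defmacro unless [test & body]
      `(if ~test nil (do ~@body)))

    (unless (pos? -1)
      (println "only runs when the test is false"))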
As for "Type systems removing the value of a repl", again, I disagree. REPL-driven development is as much about exploring the problem as it is about writing code.
Stability in this context means "can make changes with confidence in the absence of tests". The more confident you can be, the more stable. Maybe not the right word though.
> REPL-driven development is as much about exploring the problem as it is about writing code.
Which is exactly what types give you. It even coined the term "type-driven development" (deliberately echoing "test-driven development").
> Stability in this context means "can make changes with confidence in the absence of tests". The more confident you can be, the more stable. Maybe not the right word though.
Maybe what I should have said was something along the lines of an "iterative language." Essentially I was thinking of a language where you can reload the functions/types in a single module and see the results instantly without restarting. The shortening of the feedback loop is the important part. And also important is being able to take some code and put it in a REPL to find out what it does or modify it until it's just right.
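For reference, in Clojure that loop looks roughly like this (my.app and handler are hypothetical names):

    ;; edit src/my/app.clj, then in the running REPL:
    (require 'my.app :reload)     ; recompile just that namespace
    (my.app/handler {:uri "/"})   ; exercise the changed function immediately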
One language which is explicitly designed to have first-class hotloading support and static types is Mun. It's not production ready, but it was born out of frustration over the dynamically typed nature of Lua while appreciating the benefits it brought to extensibility and rapid prototyping.
For me Lua has the iterative part, but no types. The iterative part makes LÖVE an attractive target for gamejam developers, who don't want to be constrained by compile-restart cycles when they don't yet understand what every aspect of the finished game will look like.
I don't think it's about its dynamism but more about it being functional. Plenty of super popular dynamic languages out there.
I think that's also what keeps Elixir from becoming something more mainstream; most people come from OOP and are used to thinking about programming that way.
You're both wrong. If it were dynamism, then why are JS, Python, and Ruby so popular? If it's about being functional, then why has Scala got more users?
Languages are driven by the platform. There is no Clojure platform that people want to use, so no one uses Clojure.
If a language isn't bound to its own platform, it can share a platform and displace other tools, like Python, Go, and Rust do with C and C++ (Docker is Go; docker-compose and dnf are Python; etc.).
Scala has carved out part of the JVM platform (Spark, Kafka). Clojure has not.
A common on-ramp is command-line tools, but hello world in Clojure takes 670ms to run. This is a total non-starter.
time clj -M hello.clj
Hello world
clj -M hello.clj 1.05s user 0.12s system 175% cpu 0.672 total
Without an on-ramp to take over a platform, Clojure will not gain traction. Language quality is not a significant driver of adoption; that's why shonky R has so many users. They will suffer a great deal to use dataframes and ggplot2.
If Clojure is so great, where are these cathedrals that people have made that should make it a no brainer to pick up Clojure? Around what are we circling the wagons?
Clojure has reached sustainable traction. There are large companies using it. There are shops that work with it exclusively. There's no need to conquer the world ;) If some users are evangelizing too loudly, well, it happens everywhere. You can choose to ignore it. And if you are curious why you can check it out and decide for yourself. No random person on HN can convince you.
> If it's about being functional, then why has Scala got more users?
The Scala world is not that functional if you measure it by real-life usage, at least. Most of the Scala code I've seen was mostly OOP sprinkled with functional tricks.
That's actually not correct at all. Hello World is slow because you have to start up the JVM. Once it's started it's rather fast. Your web application has the same JVM startup cost but not a lot more than that.
Both of you miss the most obvious difference between Clojure and all mainstream languages: it's a lisp.
I've had so many programmers look at code I write and proclaim: "wow, that looks impossible because of the parentheses", and they would never touch anything like it because it seems so different.
Some people do take the time to learn how it works, but many just have a knee-jerk reaction to it and then forget about the language itself.
I have a theory: it's because Blurp programmers learn to equate parentheses with complex (mathematical) expressions. The sight of many parentheses at once must trigger an acceleration of their heart rate.
What makes it different is that syntactic constructs are also expressed with the same syntax and there is no special syntax, so control flow constructs use a syntax similar to function calls.
This is particularly evident in Arc, where what in many languages would be:
if (<cond1>) {
  <then expr1>
} else if (<cond2>) {
  <then expr2>
} else if (<cond3>) {
  <then expr3>
} else {
  <else expr>
}
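is in Arc just, roughly, a single variadic if:

    (if <cond1> <then expr1>
        <cond2> <then expr2>
        <cond3> <then expr3>
        <else expr>)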
Note the complete lack of syntax beyond a keyword taking 7 arguments in order, which many find not conducive to reading, and also error prone, as a single typo can completely change the meaning of the program.
It is thus that most Lisps have somewhat more redundant syntax:
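roughly along these lines, with each condition-expression pair wrapped in its own parentheses:

    (cond (<cond1> <then expr1>)
          (<cond2> <then expr2>)
          (<cond3> <then expr3>)
          (t <else expr>))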
Personally, I wouldn't mind if a mandatory `->` were required between the conditions and the expressions, to further guard against accidental typos. I find that redundancy in syntax guards against accidental mistakes, though I have nothing against the parentheses and in fact favor them.
Clojure specifically doesn't have the bracket wrapper around condition-expression pairs, but syntax highlighting/formatting takes it into account and aligns things nicely. I don't think it is a problem that it basically is a list or function call with arguments. I do think adding `->` just like that would be a problem, because it would be very inconsistent with the rest of Clojure's syntax.
Well, yes. Clojure in general tends to have less special syntax and in general is designed very consistently. What I find very useful are the threading macros ->> and ->, which basically give you a pipeline:
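    ;; a rough sketch; the exact functions don't matter, the shape does
    (require '[clojure.string :as str])

    ;; thread-first: the value is passed as the first argument of each step
    (-> "  hello  "
        (str/trim)
        (str/upper-case))
    ;; => "HELLO"

    ;; thread-last: the value goes in last, handy for sequence pipelines
    (->> (range 10)
         (map inc)
         (filter odd?)
         (reduce +))
    ;; => 25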
Often you can remove some parentheses in the first case as well.
I really love this. It is also very easy to just wrap it in a let or a function. This alone should open the eyes of anybody who has written at least some shell script somewhere or does some data analysis. If it doesn't, maybe the person isn't actually that great a thinker or practitioner and you would be better off around other people professionally. Frankly, who are the people who cannot grok moving the opening parenthesis before the method/function? I don't think I have ever met anybody like that - I only read/hear about such people in comments or at conferences, and I haven't heard a name yet.
I don't think it's the parens, but the s-expressions themselves.
Lisps force you to maintain the stack for the parse tree in your head, something humans aren't that great at. S-expressions are the programming language equivalent of center embedding, which is quite alien for human languages (the depth is three at most; compare that to your favorite Lisp program).
If you've seen 300+ line react components marching off the right of the screen, you'll know that maintaining the parse tree in your head isn't a barrier to popularity.
While this made me laugh a bit, I think there's a meaningful difference between using "tree-like" syntax for all your code (lisp & S-expressions) vs declarative UI descriptions (JSX).
A deeply nested syntax is beneficial for UI work because you can correlate the structure of the code with the interface/document being rendered. S-expressions for HTML/UI in the form of Hiccup-style templates are equally good (if not better) for the same reason.
In JSX however, there is a clear syntactic distinction between behaviour (C-like JavaScript syntax) and interface descriptions (HTML-like element constructors). In Lisps, the uniformity makes it harder to quickly distinguish "behaviour" from "data", which is kind of the point, but comes with a trade-off in readability.
> In Lisps, the uniformity makes it harder to quickly distinguish "behaviour" from "data", which is kind of the point, but comes with a trade-off in readability.
Maybe in theory, but in practice it's easy to tell because in the case of hiccup, the data is data (as vectors) and behavior is behavior (as function calls), those have different syntax in Clojure.
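For instance, a small hiccup-style sketch (the data here is made up):

    [:ul {:class "names"}                ; markup is literal vectors and keywords (data)
     (for [n ["Ada" "Grace" "Barbara"]]  ; behaviour is an ordinary function/macro call
       [:li n])]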
Lisps are excellent at manipulating trees, which is exactly what HTML and the DOM is.
You might be right on the first part, and the "fear" of s-expressions is only expressed as a fear of parentheses.
But on the second part I think that's the same as for most languages. You end up with nested scopes to the same degree as in any C-like language, really. But most Lisp programmers tend to break out into new functions a bit earlier than, let's say, JS programmers. I'd argue that normally you'd have to keep track of less depth in a typical Clojure program than you would in a JS program, simply because of how a programmer usually works in those languages.
That's true if you write everything on one line. But how could you fail to notice that most Lisp code is written on multiple lines and indented? The structure is visually laid out, so as not to be maintained in anyone's head.
Line breaks only shorten lines; they do not change the reading direction. Yes, you can start at the bottom and read upwards, but that's unnatural for most.
Compare:
something.first().second().third()
With:
(third
  (second
    (first
      (something))))
From the end read you must, to understand it, and it gets more complicated as your code does. No wonder Clojure's threading macro is so popular, as it would allow you to write it as:
(-> something first second third)
Fun fact: Lisp was never supposed to be written with S-expressions. They were an intermediate representation, for bootstrapping. McCarthy designed M-expressions, with function notation, infix, and sugared cond and list; but all that was omitted due to lack of time, and we were left with S-exps.
> Lisp was never supposed to be written with S-expressions. They were intermediate representation, for bootstrapping.
That's not the complete picture.
The early Lisp manual had a definition for Lisp syntax. The Lisp syntax was based on M-expressions for code and S-expressions for data.
Basically what now is
(append (quote (1 2 3))
        (quote (a b c)))
was
append[(1,2,3);(A,B,C)]
where the function call uses M-Expression syntax and the data were S-expressions.
Then we have so-called S-Functions, which work with S-expressions. append is such an s-function.
McCarthy then defined a mapping from M-Expressions to S-Expressions, so that M-Expressions could be represented (not just written, but also in memory) as S-expressions.
In the next step he defined new S-functions called apply and eval, which took M-Expressions as S-Expression data and computed the results of apply or eval operations.
Example use of apply:
apply[(LAMBDA,(X,Y),(CONS,X,Y));((A,B),(B,C))]
Thus these s-functions could compute with code which was represented at runtime by s-expression data.
Thus such a program would use both code in M-Expression format and compute with code in S-Expression format.
Since these S-functions apply and eval could themselves be translated to s-expressions and executed, the specific S-functions apply and eval could be run by an s-expression evaluator.
(apply (lambda (x y)
         (cons x y))
       (quote ((a b) (b c))))
Since programs thus were executed / computed as s-expressions, they were input, computed and printed as s-expressions.
Thus the idea of a simple s-expression meta-programming system made the idea of an additional step of m-expression syntax reading/printing less attractive.
Your piping example via .member() is a strawman, because those () sometimes have arguments; that's what they are there for. Function application has not gone away; it's just combined with obj.member access. It can easily become an unreadable mess that needs some way of splitting across lines and indenting:
something.first(other.foo(bar.f(x, y)).memb, z).second(x.y()).third(a, b, c)
This:
(third
  (second
    (first
      (something))))
is just function notation with the location of the opening parenthesis re-examined, and commas removed. Function application notation is found in myriad languages: sin(cos(pow(x, 2))).
With the above indentation, it's very readable to me; it's very clear that calculation starts with (something) and moves in an outward direction.
(-> something first second third)
Right, yes, so we have threading macros, and people use them. That's not all that goes left to right. Lisp's ancient progn (including implicit progn) goes left to right, as do the arguments of functions and most macros:
Of course you could work around s-expressions (and make your ALGOL-formatted language look like Lisp, a common complaint against my own code from my coworkers); that wasn't my argument.
Most aren't fond of Lisp syntax, regardless of how you dress it up, and thus writing in Lisps dooms you to having fewer people to hand your code over to, and I don't think that scarcity is useful.
I suspect the reason most don't like s-expressions is that they force the human reader to maintain a "mental stack", an exercise humans are not too good at, as demonstrated by human languages' aversion to center embedding.
You seem to be stuck on this idea of deep function call nesting being an impediment, which is solved by foo.bar().baz() chained syntax using object dot notation.
But most mainstream languages have chained function call notation as a feature.
Furthermore, foo.bar().baz()... chaining is a fairly recently emerging idiom. It has been possible in a number of languages for decades already, but somehow didn't take off. You would hardly see chains of foo.bar().baz().xyzzy() in 1990 vintage C++ code bases, even though 1985 vintage C++ would easily support it.
Anyway, there is a dialect of Lisp which has integrated the dot syntactic sugar into S-expressions, according to this basic idea:
This is the TXR Lisp interactive listener of TXR 257.
Quit with :quit or Ctrl-D on an empty line. Ctrl-X ? for cheatsheet.
Poke a few holes in TXR with a fork before heating in the microwave.
1> '(quote x) ;; i.e. just like we have a 'x -> (quote x) sugar ...
'x
2> '(qref x)
(qref x)
3> '(qref x y) ;; we can have a x.y -> (qref x y) sugar
x.y
4> '(qref x y z)
x.y.z
5> '(qref x y z w)
x.y.z.w
6> '(qref x y 3 w) ;; (let's not when it's ambiguous with floating-point)
(qref x y 3 w)
7> '(uref x) ;; ... and a .x.y (uref x y) sugar
.x
8> '(uref x y)
.x.y
9> '(uref x y z)
.x.y.z
10> '(uref x y 3 w)
(uref x y 3 w)
Embedded compounds are possible, of course:
11> '(qref (a) (b) c (d) e f (g))
(a).(b).c.(d).e.f.(g)
I never intended this to be used for chaining! In fact, only in a fairly recent update to the list-builder object did I fix it so it can do this:
Needless to say, the methods have to return the object in order to make this possible. Before the update, the methods didn't have a specified return value.
In the first place, list-builder is an implementation mechanism under the build macro, which expresses it like this:
so there is no reason to use list-builder directly in most code.
This chaining business is a minor benefit (if at all) of the dot notation. The main motivation is to make programming with structures and OOP more ergonomic. It has a big impact for programs that use data structures, because the use of data structures and OOP can pervade the entire program and is a driver behind its structure.
Anyway, anyone discussing S-expressions under the assumption that they do not have a dot notation that can be used for function chaining is simply unaware of the research having been done in this area in the TXR Lisp project.
Can you blame them, though? Look at Python/Ruby, where even non-developers can sometimes understand the code, and then look at some crazy Haskell/Lisp expression.
I think a lot of the readability issues come from lisp syntax being unfamiliar, not from some intrinsic impenetrability.
And then even more from being associated with functional programming, also unfamiliar to a lot of people.
Look at Common Lisp's nested for loops[0]. Are they really that hard to read?
And then there's the minimalism of Lisp syntax. Once you know how to call a function and the few data literals (lists, vectors, sets, ...), you know 90+% of the syntax.
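A quick sketch (the names are made up):

    (defn greet [who]            ; a definition is itself just a parenthesized form
      (str "hello, " who))

    (greet "world")              ; function call
    [1 2 3]                      ; vector literal
    {:name "Ada" :born 1815}     ; map literal
    #{:clj :cljs}                ; set literal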
Compare with python, where you also need to learn class syntax, annotation syntax, for loops, if, while, comprehensions, etc.
And the list keeps growing; there are new features being added continuously. In Lisp, if the language introduces a new construct, it's still going to be just symbols between two parentheses.
I'd argue that because of the lightness of Lisp syntax compared to C-like languages, Lisp languages are easier to read. But then I'm a professional Clojure developer who used to do Golang, JavaScript and some other languages, and for the last couple of years I've only done Clojure and ClojureScript development, so I'm obviously biased.
But then Python is probably the worst example to compare Lisps to, as Python treats whitespace as significant, which can affect whether the program runs or not.
I'm sympathetic to the parent comments' view that it's not as bad as people say, but I'd definitely agree with you that syntax, at least to me, provides a lot of value. Of course, too much syntax is a problem in the other direction.
I really like Clojure, it's the language that finally made FP "click" for me. It was my go to for hobby/side projects for quite a while.
Dynamic typing is why I eventually switched. Haskell scratches the same itches that Clojure did, but the compiler and type system are immensely helpful, and keep saving me from tripping over my own feet.
Somewhat off topic: My problem with Haskell is that every time I've tried to read the documentation, I've felt like I needed a PhD in type theory to understand all of the terminology. As a practitioner (not a researcher), I just want to know how to do things, but the documentation has always been a roadblock to me. So, after a number of attempts at learning to use Haskell, I've decided its not for me. Not because of the language itself, but because of the traditions around it.
This. I think I understand the basic concepts, but you get a first big slap when making your first curl-style request to some other service. The ergonomics of the libraries are oftentimes horrible. You are in this constant loop: oh, I can't do this, I need an algebraic derationalizer to align those types. Several hours later your request works. You start staring at the wall to decompress. Curl is a pretty good example: it's a complexity that has been made super easy in a lot of languages.
I am very much waiting for some "extremely" constrained subset/flavour of Haskell that gets some adoption (I do not think I am alone in this lobby). No crazy stuff, no "just read the types", no "it's just a small extension".
But I also know that it is not really feasible without breaking the IO enforcements.