As someone who loves Lisps, I still have to disagree on the value of the s-expression syntax. I think that sexps are very beautiful, easy to parse, and easy to remember, but overall they're less useful than Algol-like syntaxes (a family to which I'd say most modern languages, including C++, belong), for one reason:
Visually-heterogeneous syntaxes, for all of their flaws, are easier to read because it's easier for the human brain to pattern-match on distinct features than indistinct ones.
The GP is factually wrong. There's plenty of empirical evidence to indicate that language influences thought, and that syntax is therefore important.
Although, I would point out that while your argument ad absurdum is generally reasonable (syntax can indeed make the difference between a very good language and an unusable one), Whitespace and Malbolge also have terrible semantics that contribute to them being unusable.
As a former Lisp enthusiast (and still an enjoyer), I'd actually use my own darling as an example: Lisps have amazing semantics and are generally good languages. Their syntax is highly regular and structured and easy to parse...except that it's brain-hostile, and I'm convinced that it actively makes it harder to read and write - not just adopt, but actually use.
> There's plenty of empirical evidence to indicate that language influences thought, and that syntax is therefore important.
Are you talking about natural languages here? The so-called Sapir-Whorf hypothesis - in its strong or weak form - is rather controversial. There are some interesting findings, but their interpretation is still hotly debated.
In any case, none of the studies that I've seen (e.g. about colour perception, spatial reasoning, etc.) seem to be about syntax. I'd have to see some evidence that head-marking language speakers somehow think differently from dependent-marking language speakers, and I haven't seen that.
> your argument ad absurdum is generally reasonable
it's a valid argument when somebody is speaking in absolutes, but I haven't seen GGP do that. There's a difference between saying "all syntax is completely arbitrary" and "syntax is not the point" - the latter suggests to me that if you stay within certain reasonable bounds (e.g. not being Whitespace or Malbolge), then whether you use significant whitespace or braces, whether the language looks more like Pascal or like C, etc. are of minor importance in the grand scheme of things. Which is something you may disagree with, but it's a much more reasonable point than anything you can just counter with "but Whitespace!".
> The GP is factually wrong. There's plenty of empirical evidence to indicate that language influences thought, and that syntax is therefore important.
I never said anything to the contrary. I specifically stated that syntax was not an important consideration in the design of Pony language. That does not imply the numerous strawmen that you and others attacked here. As Tainnor correctly and honestly noted:
"it's a valid argument when somebody is speaking in absolutes, but I haven't seen GGP do that. There's a difference between saying "all syntax is completely arbitrary" and "syntax is not the point" - the latter suggests to me that if you stay within certain reasonable bounds (e.g. not be whitespace or malbolge), whether you use significant whitespace of braces, the language looks more like Pascal or like C, etc. are of minor importance in the grand scheme of things. Which is something you may disagree with, but it's a much more reasonable point that anything you can just counter with "but whitspace!"."
Forget handwriting - I rarely see anything with as much care and effort put into it as this project.
I like to think that I put a lot of ~~craftsmanship~~ into my code, but the effort put into every single letter of a roughly fifteen-thousand-word book (to say nothing of the letters at the beginning of the chapters or the illustrations) is on another level.
For Common Lispers such as myself, who are vaguely aware of developments in the Scheme space: the most important difference between CRUNCH and Chicken appears to be that, while both compile down to C/object code, CRUNCH is additionally targeting a statically-typed subset of Scheme.
Opinion: this is great. The aversion of Lispers to static types is historical rather than intrinsic and reflects the relative difference in expressiveness between program semantics and type semantics (and runtime vs tooling) for much of computing. Now that types and tools are advancing, static Lisps are feasible, and I love that.
I don't believe it's not intrinsic. A lot of the reason why Lispers may be averse to static types is the perceived inflexibility they can introduce into the system. Lisp programmers don't want to be told what to do, especially by the compiler. Some CLs like SBCL have allowed some form of inference through the standard type declarations in the language.

This leads me to believe that the 'right thing' in the case of Lisp is a combination of dynamicity and some stronger typing features that can be applied during the optimization stage. The dynamic nature and ease of use of Lisp relative to its performance is one of its greatest assets: it would be nearsighted to try and sacrifice that - a good Lisp programmer can optimize the important parts of his programs such that they're comparable to, or even outperform, their equivalents in other, more "high-performance" languages. With that said, these developments might bring us closer to a "sufficiently smart compiler" that could make that latter stage mostly unnecessary.
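For concreteness, the declaration-driven checking mentioned above looks roughly like this in SBCL (a hedged sketch; ADD-COUNTS is an invented name):

  ;; A standard CL type declaration; SBCL both uses it to optimize and
  ;; checks calls against it at compile time.
  (declaim (ftype (function (fixnum fixnum) integer) add-counts))

  (defun add-counts (a b)
    (+ a b))

  ;; SBCL will typically warn at compile time about a call such as
  ;; (add-counts 1 "two"), while undeclared code stays as dynamic as ever.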
> A lot of the reason why Lispers may be averse to static types is the perceived inflexibility they can introduce into the system.
This perceived inflexibility is what my comment was getting at - with the primitive type systems available back in the '80s, yes, the types significantly constrained the programs you could write. With today's type systems, however, you have far more flexibility, especially those with "Any" types that allow you to "punch a hole in the type system", so to speak.
When I tried typed Python a few years ago, I found out that, to my surprise, 99% of the code that I naturally wrote could have static types attached (or inferred) without modification because of the flexibility of Python's type system.
I also learned that types are a property of programs, more than just languages. If a program is ill-typed, then having a dynamically-typed language will not save you - it will just crash at runtime. Static types are limiting when either (1) they prevent you from writing/expressing well-typed programs because of the inexpressiveness of the type system or (2) it's burdensome to actually express the type to the compiler.
Modern languages and tools present massive advances in both of those areas. Type systems are massively more expressive, so the "false negative" area of valid programs that can't be expressed is much, much smaller. And, with type inference and more expressive types, not only do you sometimes not have to express the type in your source code at all (when it's inferred), but when you do, it's often easier.
The "Any" type is really what steals the show. I don't think that there's a lot of value in a fully statically-typed Lisp where you can't have dynamic values at all - but I think there's a lot of value in a Lisp with a Python-like type system where you start out static and can use "unknown", "any", and "object" to selectively add dynamic types when needed.
Because, being a Lisper, you probably think like me, I'll give you the idea that really convinced me that types have positive value (as opposed to 'only' having a small negative value): they enable you to build large, complex, and alive systems.
Types are a force-multiplier for our limited human brains. With types, you can more easily build large systems, you can more easily refactor, you can start with a live REPL and more easily transition your code into source on disk. Types help you design and build things - which is why we use Lisps, after all!
I don't disagree with you there. The only thing CL really misses out on is that its type system isn't specced out enough to be as powerful as it could be. Since they're just macros you can write all kinds of crazy types, but non-terminating ones just might not work if your implementation doesn't handle them a certain way. This was actually a disputed issue: https://www.lispworks.com/documentation/HyperSpec/Issues/iss....
Being able to declare types is the reason why I switched from Scheme to Common Lisp. It's just a shame that there's basically no concept of generic types and 'satisfies' isn't quite good enough to make up for it.
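To illustrate: there's no parametric type like (list integer), so the portable workaround is a predicate wrapped in SATISFIES, which the compiler treats as opaque. A hedged sketch (names invented):

  (defun list-of-integers-p (x)
    (and (listp x) (every #'integerp x)))

  ;; No generics, so "list of integers" is encoded as an opaque predicate.
  (deftype list-of-integers ()
    '(and list (satisfies list-of-integers-p)))

  (declaim (ftype (function (list-of-integers) integer) sum-ints))
  (defun sum-ints (xs)
    (reduce #'+ xs :initial-value 0))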
I don't think it makes sense to conflate Lispers with Schemers. I've programmed in both languages but have a stronger affinity for Scheme, partially because semantically it is less flexible and more "static-y" than Lisp. Philosophically, the languages tend to attract different personalities (to the extent that the highly fragmentary Scheme world can be characterized).
I want static types in a higher level assembly language for systems programming. That's because I want to work with machine-level representations, in which there are no spare bits for indicating type at run-time (moreover, using such a language, we can design a type system with such bits, in any way we please).
I don't want static types in a high level language.
It's just counterproductive.
We only have to look at numbers to feel how it sucks. If we divide two integers in Common Lisp, if the division is exact, the object that comes out is an integer. Otherwise we get a ratio object. Or if we take a square root of a real, we get a complex number if the input is negative, otherwise real.
This cannot be modeled effectively in a static system. You can use a sum type, but that's just a greenspunned ad hoc dynamic type.
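To make the point concrete, here's roughly what a REPL shows (exact float/complex printing varies by implementation):

  (/ 4 2)    ; => 2            an integer
  (/ 1 3)    ; => 1/3          a ratio
  (sqrt 4)   ; => 2.0          typically a float (a rational is also permitted)
  (sqrt -4)  ; => #C(0.0 2.0)  a complex
  ;; A static signature has to widen each of these to a union
  ;; (RATIONAL for /, NUMBER for SQRT) - the "greenspunned ad hoc
  ;; dynamic type" described above.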
> Now that types and tools are advancing, static Lisps are feasible, and I love that.
Hasn't that been feasible for a pretty long time already? Judging by how well-received (or not) they've been, it seems there isn't much demand for it. Things like clojure.spec and the like (compile-time + run-time typing) seem much more popular, but aren't static.
People make a lot of these as side projects thanks to the easy parsing, but what software is actually being made with them? (besides the usual Hacker News backend response)
Type inference is probably the biggest thing. You would need explicit "phases" to expand macros, disallow macro expansion at runtime, and implement bidirectional, HM-style type inference to get even close to what OCaml has.
To be honest, I'd kill for a Lisp that had the same type system as OCaml, but I suspect the closest we'll get is basically Rust (whose macro system is quite good).
Most of those aren't really ready for production use except maybe Typed Racket, which I consider too "weak" and which took a route with annotations that I'm not a fan of. Coalton is very interesting; I've been following it for a bit. Carp [0] is another one that I've been following.
Coalton is used in production for quantum computing systems and soft real-time control system automation. There are also (a small number of) Coalton jobs.
Quantum computing control systems are exactly the domain I've spent about half a decade in, and it's really not the production-like environment you think it is. Speed of iteration and flexibility to accommodate changes to hardware are paramount to success. It's also a lot easier to accept the risk of breakages in the language when the author works at your company too.
> I suspect the closest we'll get is basically Rust
Rust is categorically different from all of these other things. Lisps (and OCaml) all have interactivity as a core language feature - Rust is as non-interactive as it gets.
There is a simpler solution than type inference for removing type annotations while retaining types: remove the distinction between variable and type names. The way you handle multiple instances of a type is to define an alias for the type. In pseudocode it might look like:
type Divisor Number
def divide(Number, Divisor) = Number / Divisor
As compared to:
def divide(number: Number, divisor: Number) = number / divisor
I have implemented a compiler for a language that does this with different (less familiar) syntax. It's a bit more complicated than described above to handle local variables but it works very well for me.
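For readers who want to see the idea outside that language, here's a hedged Common Lisp approximation (DEFINE-BY-TYPE and the DIVISOR alias are invented for illustration):

  ;; DIVISOR is just an alias for NUMBER, introduced so a parameter
  ;; name can double as its type.
  (deftype divisor () 'number)

  ;; Assumes every parameter name also names a type (built-in or
  ;; DEFTYPE'd) and emits the matching declaration automatically.
  (defmacro define-by-type (name params &body body)
    `(defun ,name ,params
       (declare ,@(loop for p in params collect `(type ,p ,p)))
       ,@body))

  (define-by-type divide (number divisor)
    (/ number divisor))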
I'm not super familiar with the details but I've heard that Shen has a really good type system. Aditya Siram addresses types at 12:00 in this video: https://youtu.be/lMcRBdSdO_U
Layout is so difficult that it made me quit using Common Lisp and ncurses to build my passion project and become the very thing I swore to destroy (a React developer).
I can't be the only one who wants a simpler layout language than CSS that's designed with two decades of hindsight to provide the maximum simplicity-expressiveness product. Are there any serious projects to engineer something like this, or has everyone given up and either embraced CSS3 (waiting for the LLVM backend) or gone back to plain text?
Author here, and I also teach web dev, including CSS, at the University of Utah (including this semester). Newer parts of CSS, like flexbox layout, are both simple and powerful. Just use those! I think it's important to start thinking about learning all of the Web Platform the way you'd think about learning all of the Windows APIs, all of the Linux system calls, or all of your favorite programming language's features. People rarely do! (I have 15 years of Python experience, and I do not understand metaclasses or async.) There are lots of weird obscure corners, but you don't need to know those to build websites.
Are there any resources on learning to design simpler layout systems (like flexbox plus any other important parts) without having to adjust the design to compensate for older systems (e.g. if you were to try to implement CSS)?
I was playing with Solvespace a few weeks ago, and the thought occurred to me that the constraint-based modeling approach is exactly what I want in a layout system, and it extends to 3d even. We're stuck with CSS for now, but this must be the future.
This is super neat - SBCL is an awesome language implementation, and I've always wanted to do CL development for a "real" game console.
I'm also surprised (in a good way) that Shinmera is working on this - I've seen him a few times before on #lispgames and in the Lisp Discord, and I didn't know that he was into this kind of low-level development. I've looked at the guts of SBCL briefly and was frightened away, so kudos to him.
I wonder if SBCL (+ threading/SDL2) works on the Raspberry Pi now...
I'm not doing the SBCL parts, that's all Charles' work that I hired him for. My work is the portability bits that Trial relies on to do whatever and the general build architecture for this, along with the initial runtime stubbing.
My current unannounced project is a lot more ambitious still, being a 3D hack & slash action game. I post updates about that on the Patreon if you're interested.
I'm currently very slowly making my way through Geometric Algebra for Physicists by Doran and Lasenby. The book is a delight to read, but I'm not a mathematician, and this article is showing me that my small amount of understanding is...not nearly as deep, and especially not nearly as rigorous, as I would like. I should try to re-read with Eric's criticisms in mind.
For a physicist it eventually becomes necessary to understand exterior algebra.
This is often done in the context of differential forms, but of course can be brought back to vectors easily. With those well established tools GA doesn't offer much. This blog post seems to point out exactly this fact.
I always come back to Schutz's Geometrical Methods of Mathematical Physics as my reference for notation, but I agree. I came to this by way of General Relativity, so that colors my perceptions. The few treatments of GA that I've looked at (briefly) weren't very clear about the distinction between 1-forms and 1-vectors and seemed to assume Euclidean metric everywhere, so I left thinking that it seemed a little weird and not quite trusting it.
In any case, my experience is that the coordinate-free manipulations only go so far, but that you pretty quickly need to drop to some coordinates to actually get work done. d*F=J is nice and all, but it won't calculate your fields for you.
> It's true that any kind of pay-per-use would be a hard, hard sell, though. Who wants to have to think about whether every click is worth the nickel it's going to cost?
I've heard this argument before, but there's a common existence proof that it's possible: video games. People who play many different kinds of video games (RTS, MOBA, MMO, RPG) get used to deciding whether to buy things many times an hour with barely any cognitive load - their brains just get used to working with smaller units of time and money.
And why shouldn't they? I found sources online saying that a YouTube video earns about $5 per 1k views, or 0.5c per view. If I have to pay half a cent to watch a video, even a short five-minute one, that's almost below the threshold of caring, and even those making median income are probably going to be constrained by the actual time they have available to watch rather than by the cost of the videos. People will spontaneously spend $20 to go out to eat; after the initial adjustment to a micropayment system, they should have very little trouble spending 60 cents to watch YouTube for five whole hours after work - especially if the system has common-sense features such as clearly showing your wallet balance over time, how much you've recently spent, and how much longer your balance will last at your current rate of consumption.
Now, to be fair, the fact that it's possible, and that people will quickly get used to it after they spend some time with it, doesn't mean that people will be interested in trying it in the first place, and that's a much harder problem, because subscription services are more lucrative for companies. I think the only way to get micropayments off the ground would be a grassroots movement supported by a bunch of content creators making their stuff available on a micropayment platform. Otherwise, companies that move away from ads (e.g. Google) will just turn to subscription services to lock their users in.