Is it possible to create a programming language that has every possible feature all at once?
I realize there are many features that are opposed to each other. Is it possible to “simply” set a flag at compile / runtime and otherwise support everything? How big would the language’s source code be?
It seems that, in practice, no, it's not possible, based on what I've read from people much closer to programming language design and compiler work.
"In practice, the challenge of programming language design is not one of expanding a well-defined frontier, it is grappling with a neverending list of fundamental tradeoffs between mutually incompatible features.
Subtyping is tantalizingly useful but makes complete type inference incredibly difficult (and in general, provably impossible). Structural typing drastically reduces the burden of assigning a name to every uninteresting intermediate form but more or less precludes taking advantage of Haskell-style typeclasses or Rust-style traits. Dynamic dispatch substantially assists decoupling of software components but can come with a significant performance cost without a JIT. Just-in-time compilation can use runtime information to optimize code in ways that permit more flexible coding patterns, but JIT performance can be unpredictable, and the overhead of an optimizing JIT is substantial. Sophisticated metaprogramming systems can radically cut down on boilerplate and improve program concision and even readability, but they can have substantial runtime or compile-time overhead and tend to thwart automated tooling. The list goes on and on."
I do agree with this, but I also don't really understand a lot of the tradeoffs; at least to me, some of them seem like false tradeoffs.
Her first example is excellent. In Haskell, we have global type inference, but we've found it to be impractical. So, by far the best practice is not to use it; at the very least, all top-level items should have type annotations.
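For instance, here's a minimal GHC sketch of that convention (module and function names are just illustrative; `-Wmissing-signatures` is the relevant warning):

```haskell
{-# OPTIONS_GHC -Wmissing-signatures #-}
module Example where

-- GHC could infer this type on its own, but the idiom (and the warning
-- enabled above) is to write the top-level signature anyway.
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)

-- Local bindings still lean on inference, where the types are usually obvious.
normalize :: [Double] -> [Double]
normalize xs = map (/ m) xs
  where
    m = maximum xs  -- inferred as Double, no annotation needed
```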
The second one, structural typing: have your language support both structural types and nominal types, then? This is basically analogous to how Haskell solved this problem: add type roles. Nominal types can't convert to one another, whereas structural types can. Not that Haskell is the paragon of well-designed languages, but... There might be some other part of this I'm missing, but given the obviousness of this solution, and the fact that I haven't seen it mentioned, it is just striking.
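As a rough sketch of what the type-roles point looks like in GHC Haskell (Meters and Tagged are made-up types for illustration, and this is not a full structural-typing story):

```haskell
{-# LANGUAGE RoleAnnotations #-}
module Roles where

import Data.Coerce (coerce)

-- Representationally, Meters is just Int, so Coercible converts a whole
-- list for free: no runtime cost, no boilerplate mapping.
newtype Meters = Meters Int

toMeters :: [Int] -> [Meters]
toMeters = coerce

-- A nominal role annotation opts the tag parameter out of such conversions,
-- even though the runtime representation does not depend on it.
data Tagged a = Tagged Int
type role Tagged nominal

-- With the role above, `coerce :: Tagged Meters -> Tagged Int` is rejected;
-- drop the annotation and that coercion is allowed again.
```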
On dynamic dispatch: allow it to be customized by the user; this is done today in many cases! Problem solved. Plus, with a global optimizing compiler, if you can deal with a big executable size, you can have your cake and eat it too.
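One hedged reading of "let the user pick the dispatch", in Haskell terms: the same class can be used with statically resolved polymorphism or wrapped in an existential that forces runtime, dictionary-based dispatch. Shape, Circle, and friends are invented for the example:

```haskell
{-# LANGUAGE ExistentialQuantification #-}
module Dispatch where

class Shape a where
  area :: a -> Double

data Circle = Circle Double
data Square = Square Double

instance Shape Circle where area (Circle r) = pi * r * r
instance Shape Square where area (Square s) = s * s

-- Statically resolved: the concrete type is known at each call site,
-- so the compiler can inline/specialize the instance.
totalArea :: Shape a => [a] -> Double
totalArea = sum . map area

-- Dynamically dispatched: each wrapper carries its own method dictionary,
-- so one list can mix shapes, at the cost of an indirect call.
data AnyShape = forall a. Shape a => AnyShape a

totalAreaDyn :: [AnyShape] -> Double
totalAreaDyn = sum . map (\(AnyShape s) -> area s)
```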
On JIT: yes, a JIT can take some time; it is not free. But a JIT can make sense even in languages that are AOT-compiled, since in general it optimizes code based on observed usage patterns. If AOT loop unrolling makes sense in C, then runtime optimization of fully AOT-compiled code should be advantageous too. And today you can just about always count on having a spare core to run this kind of thing on; we have so many of them available and so few tools to easily saturate them. Even if you do saturate them today with N cores, you probably won't on the next generation, when you have N+M cores. Sure, there has to be some overhead when swapping out the code, but I really don't think that's where the overhead mentioned above comes from.
Metaprogramming systems are another great example: yes, if we keep them the way they are today, at the _very least_ we're saying we need some kind of LSP support to make them reasonable for tooling to interact with. Except, guess what: every language of any reasonable community size nowadays needs an LSP anyway. Beyond that, there are lots of ways to think about metaprogramming other than the macros we commonly have today.
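As a tiny, hedged Template Haskell sketch of that tradeoff: the splice below saves hand-writing code, but tooling only sees the result after expansion (compile with `-ddump-splices` to inspect it); `snd3` is just an illustrative name.

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Meta where

import Language.Haskell.TH

-- Hand-writing \(a, b, c) -> b is easy; generating selectors for many
-- tuple sizes is where a macro starts paying for itself. The compiler runs
-- the splice; the rest of the program (and, without extra support, the
-- editor) only sees its expansion.
snd3 :: (Int, Int, Int) -> Int
snd3 = $(do
  names@[_, b, _] <- mapM newName ["a", "b", "c"]
  lamE [tupP (map varP names)] (varE b))
```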
I get her feeling; balancing all of this is hard. One thing you can't really get away from here is that all of this increases language, compiler, and runtime complexity, which makes things much harder to do.
But I think that's the real tradeoff here: implementation complexity. The more you address these tradeoffs, the more complexity you add to your system, and the harder the whole thing is to think about and work on. The more constructs you add to the semantics of your language, the more difficult it is to prove the things you want about its semantics.
But, that's the whole job, I guess? I think we're way beyond the point where a tiny compiler can pick a new set of these tradeoffs and make a splash in the ecosystem.
Would love to have someone tell me how I'm wrong here.
Seriously though, not really, because programming languages are designed, and design is all about carefully balancing tradeoffs. It starts with the purpose of the language -- what's it for, and who will be using it? From there, features follow function. Having all the features is counterproductive, because some features are only good in the absence of others, and some are mutually exclusive. For example, being 1-indexed or 0-indexed: you can't be both, unless it's configurable (and that just causes bugs, as Julia found out).
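For a miniature of what "configurable" indexing can look like, and why it bites: GHC's Data.Array lets every array pick its own bounds, and code that hard-codes one base breaks on the other (the names here are illustrative):

```haskell
module IndexBase where

import Data.Array (Array, bounds, listArray, (!))

zeroBased, oneBased :: Array Int Char
zeroBased = listArray (0, 2) "abc"
oneBased  = listArray (1, 3) "abc"

-- Code that assumes a 0 base works on one array and fails on the other.
firstElem :: Array Int Char -> Char
firstElem a = a ! 0

main :: IO ()
main = do
  print (firstElem zeroBased)    -- 'a'
  print (fst (bounds oneBased))  -- 1
  -- print (firstElem oneBased)  -- would raise an index-out-of-range error
```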
If you want your language to be infinitely configurable to meet the needs of everyone, then you would want it to be as small as possible, not big. Something like Lisp.
However, can a lot more "features" (or better, "programming paradigms", or, as I would call them, "architectural styles") be accommodated together, usefully?
Absolutely yes!
Objective-S does this using a combination of two basic techniques:
1. A language defined by a metaobject protocol
2. This metaobject protocol being organized on software architectural principles
(1) makes it possible to create a variable programming language. (2) widens the scope of that variability to encompass pretty much all of programming.
What was surprising is how little actual language mechanism was required to make this happen, and how far that little bit of mechanism goes. Eye-opening!
However, the site doesn't really explain how (1) and (2) work, or how they work together. That is explained, as best I could manage within the space limitations of an academic paper, in "Beyond Procedure Calls as Component Glue: Connectors Deserve Metaclass Status".
Alas, the PhD thesis that goes into more detail is still in the works (getting close, though!). That still being in the works also means that the site will not be updated with that information for a bit. Priorities...
> Is it possible to create a programming language that has every possible feature all at once?
Some random thoughts about this:
If languages are a tool of communication between programmers (there's that adage "primarily for humans to read, and only secondarily for computers to run"), would this be a good idea?
Wouldn't each set of flags effectively define a different language, with a combinatorial explosion of flag combinations?
An act of design is not only about what to include, but what to leave out. Some features do not interact well with others, which is why tradeoffs exist. You'd have to enforce restrictions on which flags can be used together.
You'd be designing a family of programming languages rather than a single language. Finding code in this language would tell you little, until you understood the specific flags.
I would assume not. Consider ARC vs. GC. With ARC, the compiler inserts reference-counting operations and the programmer still thinks about ownership and cycles; with GC, the runtime reclaims memory for you. People choose one or the other because of the style they're going after. Could you set a flag for it? Maybe, if all the libraries you download came pre-compiled.
Personally, I just prefer to know that the team that supports the tool I'm using is dedicated to "that one thing" I'm after.
You could do it. You could allocate something as garbage-collectible or not at allocation time. You probably would want them to be different types, though, so that the type system keeps track of whether you need to manually deallocate this particular thing, or whether the garbage collector will do so.
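For concreteness, Haskell's standard FFI libraries already draw roughly this distinction in the types: a plain Ptr from malloc must be freed by hand, while a ForeignPtr is reclaimed once it becomes unreachable. A minimal sketch (function names are made up):

```haskell
module ManualVsManaged where

import Data.Word (Word8)
import Foreign.ForeignPtr (mallocForeignPtrBytes, withForeignPtr)
import Foreign.Marshal.Alloc (free, mallocBytes)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peek, poke)

-- Manual: a Ptr must be freed by the programmer.
manual :: IO ()
manual = do
  p <- mallocBytes 1 :: IO (Ptr Word8)
  poke p 42
  peek p >>= print
  free p  -- forget this and you leak

-- Managed: the memory behind a ForeignPtr is reclaimed automatically
-- once the ForeignPtr becomes unreachable.
managed :: IO ()
managed = do
  fp <- mallocForeignPtrBytes 1
  withForeignPtr fp $ \p -> do
    poke p (7 :: Word8)
    peek p >>= print
  -- no explicit free here

main :: IO ()
main = manual >> managed
```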
It could be done... but I'm not sure what it would gain you. It seems to me that knowing that it will do one or the other is better than having to think about which it will do in each instance.
The Nim language, IIRC, can use ARC, a tracing GC, or even manual management via its multi-paradigm memory management strategies.
Source: https://nim-lang.org/1.4.0/gc.html
Since all languages compile to the same representation on silicon (give or take a few opcodes), it would have to be a language with customizable grammar and runtime.
I for one would LOVE to make semicolons; optional and choose between {} braces/indentation per project/file, just like we can with tabs/spaces now (tabs are superior)
It depends how we're defining "representation", but I'd argue that languages are definitely more dissimilar here than they are the same. If you want to mix and match two compilation units with the semantics of different languages, even something as simple as a cross-language if-statement is going to be a hard bridge to cross, and even when targeting a single runtime it's easy to have a syntactic layer which doesn't map efficiently to that runtime.
That said, for the first thing they asked for (different syntactical views on the same "substrate," where I'm assuming the language has one model of how its runtime works), that's very doable.
No. It's a common enough term, but the handwavy concept I wanted to get across is that if you have code mixing and matching different syntaxes then there will necessarily be boundaries between those. Code with one syntax (if you can actually mix and match runtimes as the comment author said they want) will behave differently from "adjacent" (commonly a different file or directory, but I could imagine multiple syntaxes within a file too) code with a different syntax.
In common languages, you're usually still targeting the same runtime across different compilation units, but "compilation unit" is a rough description of optimization boundaries (you compile one unit at a time and stitch the results together during linking). Some techniques bridge the gap, and thus blur that crispness (e.g., post-hoc transformations on compiled units, leading to one larger "effective" compilation unit), but you can roughly think of each unit as equivalent to a whole shared library.