I have been writing Scala for 2 years, mostly in Spark and Flink, and as much as I love parts of it, overall the language is just not the most approachable. Some of the features are super valuable every once in a while, but I would gladly give some of them up for a streamlined language that is easier to get people started on (while not just writing imperative-style Java with Scala syntax)
My other curiosity is compatibility. If a subset of my project that doesn't need all of Scala can be compiled much faster, that seems like an okay place to be in.
Check out Kotlin. I picked it up in like half a day. It wasn't idiomatic Kotlin, but w/e. It can compile to JS and you can use just about any JS lib from Kotlin (which is insanity). You'll be able to write iOS apps with it soon enough.
He's using Spark and Flink... Given that I'm also in that big data space, I'm certain the tooling provided by Kotlin is strictly worse than Scala for those applications.
> It will be interesting to see what the subset is.
Totally. It's interesting that they approach this from a compile-speed angle; I wonder how much they are also considering the mental load of certain features.
This might become an interesting project for companies already doing Scala. New ones at this point will probably prefer Kotlin.
> I have been writing Scala for 2 years, mostly in Spark and Flink, and as much as I love parts of it, overall the language is just not the most approachable. Some of the features are super valuable every once in a while, but I would gladly give some of them up for a streamlined language that is easier to get people started on (while not just writing imperative-style Java with Scala syntax)
Any concrete ideas? In my experience the parts people find intimidating often aren't actually language features, they're things some library or other implements.
I mean there are things I would remove from Scala if I were in charge - structural types (provided there was still a way to express partially applied types i.e. standardize kind-projector in the core language), Dynamic - but they don't tend to be the parts that trip people up.
Not the same thing. The thing that slows down compilation is recursively derived implicit parameters for typeclasses, which don't tend to be a problem for reading: you just write "myObject.toJson" (and behind the scenes it recurses through the structure of myObject at compile time, ensuring that all of the nested types bottom out in values that can actually be serialized to JSON, and compiles to something of similar efficiency to manually naming all the fields). The thing that causes readability issues tends to be overuse of implicit conversions, and the community has largely moved away from that, thankfully.
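The recursion described here can be sketched with a toy typeclass; `ToJson`, `JsonOps`, and the instances below are hypothetical illustrations, not any real library's API:

```scala
// Toy sketch of recursively derived typeclass instances (hypothetical
// ToJson, not a real library's API). Resolution happens at compile time:
// the compiler chains implicit defs until every nested type bottoms out
// in a base instance, or the program fails to compile if one doesn't.
trait ToJson[A] { def toJson(a: A): String }

object ToJson {
  implicit val intJson: ToJson[Int]       = i => i.toString
  implicit val stringJson: ToJson[String] = s => "\"" + s + "\""
  // Recursive step: a List[A] is serializable iff A is.
  implicit def listJson[A](implicit inner: ToJson[A]): ToJson[List[A]] =
    xs => xs.map(inner.toJson).mkString("[", ",", "]")
}

// Provides the "myObject.toJson" call-site syntax from the comment.
implicit class JsonOps[A](a: A) {
  def toJson(implicit tj: ToJson[A]): String = tj.toJson(a)
}
```

Here `List(List(1), List(2, 3)).toJson` makes the compiler chain `listJson(listJson(intJson))` at compile time, while a `List[Socket]` would be rejected at compile time because no base instance exists.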
This is interesting. Would it be helpful, and possible, to add a "YOLO mode" to the Scala compiler so that it just make-believes that recursively derived implicit parameters bottom out properly? Maybe force the caller to give an explicit type (even if the type is wrong). I'd do that to reacquire some semblance of quick compile times.
Then, when the developer has finished, they can switch off "YOLO mode" and start playing whack-a-mole with the types as needed, punctuated by longer compilation times.
> This is interesting. Would it be helpful, and possible, to add a "YOLO mode" to the Scala compiler so that it just make-believes that recursively derived implicit parameters bottom out properly?
It's not possible to actually build, because at the end of the day you are (presumably) using the implicit for something (e.g. JSON-serializing the value). You could explicitly pass ??? where the implicit is wanted to get a runtime failure where it's used. I guess it might be possible to have tooling do that "magically", but I'm not sure how useful that would be; if you're working on that particular area you can use ??? by hand, if you're not working on that area then you presumably won't be rebuilding it since you're presumably using incremental compilation anyway.
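A hand-rolled version of that `???` trick might look like this (`Encoder` and `render` are hypothetical stand-ins for, say, JSON machinery):

```scala
// Hand-rolled "YOLO mode": stub the implicit with ??? so the code
// typechecks now and only fails at runtime where it is actually used.
trait Encoder[A] { def encode(a: A): String }
case class User(name: String, age: Int)

def render[A](a: A)(implicit e: Encoder[A]): String = e.encode(a)

// A def rather than a val: a val would throw NotImplementedError
// eagerly at initialization instead of at the call site.
implicit def userEncoder: Encoder[User] = ???
```

Any code path that resolves and invokes `userEncoder` throws `NotImplementedError`; everything else compiles and runs normally.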
What is possible is to fail-fast when resolution fails, and not check for duplicates when resolution succeeds - one main reason for the blowup is that at every stage of recursive resolution you have to check whether the implicits were ambiguous. There's a proposed fix for that piece: https://github.com/scala/scala/pull/5649
TypeScript is kind of like that. You can run the compiler with the type validations disabled. That can lead to potentially broken code, but it can also be significantly faster than waiting for all type checks to complete.
The idea is to have fast compilation most of the time (local dev) and slower/full compilation during the "real" build.
Here's a testing analogy: my CI server runs all tests, every time. My local machine only runs the ones I tell it to run; if I change one small piece of an app, I'm not going to wait for the entire test suite to run.
Depends completely on how the implicits are being used. The older form of implicit conversions are pretty rough, but the more modern trend towards implicit enhancement IMO can improve readability.
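A minimal sketch of the enhancement style: the extension method adds behaviour, but the receiver keeps its own type, so readers never have to guess what a value silently became (method name `indentBy` is invented for illustration):

```scala
// Implicit enhancement: String gains an indentBy method, but a String
// stays a String everywhere else. Contrast with an implicit conversion,
// which silently turns a value into a different type.
implicit class StringIndentOps(s: String) {
  def indentBy(n: Int): String = (" " * n) + s
}
```

At the call site this reads like an ordinary method call, which is why the pattern tends to help rather than hurt readability.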
That can be mitigated by IDEs showing marks on implicit uses, with mouse-over-to-peek-definition features (hope this lands in IntelliJ... and hopefully a free license for me lol)
I am a bit concerned that between Scala, Dotty, Typelevel, and now the Reasonable Scala Compiler, the Scala community might become fragmented. Hopefully rsc's claim of "making our findings available to the Scala community at large" holds true.
Edit: should have kept reading before commenting. From their related-work document:
>we are definitely not declaring Lightbend Scala dead. To the contrary, we view ourselves as complementary to Lightbend Scala and hope that our results will create new opportunities for the official compiler. We are planning to keep in contact with Lightbend to discuss our findings and facilitate technology transfer.
Scala and Dotty will be one and the same in the next major version. The rest are niche tools. No one should be afraid to develop their own compiler or toolchain for fear of fragmentation; the lessons learned can only help the trunk.
I'm interested in what they said about reducing the compiler to 4 passes. Dotty takes a really interesting approach to this with 'miniphases' (https://infoscience.epfl.ch/record/228518/files/paper.pdf). I think it'll be interesting to compare these two methods when both compilers are production ready.
This truly means we still lack a programming language with:
1. reasonable support of object oriented programming.
2. reasonable support of functional programming.
3. solid concurrency features.
4. performance comparable to native code.
5. simple to learn.
It is not difficult to create such a language, but we see a new language popping up every now and then and none of them try to solve these issues.
1. Solid mix of functional programming and object-oriented programming (though FP is more important - I'd gladly give up inheritance if I get good support for interfaces and traits instead)
2. Static type checking
3. Good support for efficient immutable data structures.
4. Solid concurrency features
5. Easy cross-compilation to and interaction with JS
5a. Cross-compilation to iOS and Android would be nice but it's not a dealbreaker
6. Good tooling (i.e. IDE-support, package manager)
7. A solid standard library
8. An active open source community around it
9. Said open source community should contain a good web MVC framework (or sensibly pluggable modules to the same effect)
10. A REPL
11. Sensible compilation times
So far it looks to me like only Scala and Haskell are candidates - and it sounds like Scala really struggles with the compilation times. If Reason/ocaml gained more traction it might become a contender. And if I gave up on static type checking then Clojure would be an option. And if I settled for a sprinkling of functional programming instead of having that be the basis, then Kotlin or maybe C#. Any other suggestions?
I think you've covered the main options. At a guess: Maybe F# if you're willing to go with a smaller open-source community. Maybe Rust or Idris if you're willing to deal with something a bit less mature with less tooling. Maybe Swift, Crystal or Nim, though I know less about them and their communities seem smaller.
Besides the options you mentioned, I really think TypeScript is a strong contender for that list. Even though it's just JavaScript with typing, it ticks most boxes you mention. Depending on how you value some features (e.g. nr. 5) it might even be the best option.
Imho they are about the same, or maybe TS is even a little bit better, since you can have truly freestanding functions (C# has either (static) member functions or delegates). The JavaScript ecosystem probably also leans a bit more towards FP style in the mainstream (with lodash, ramda, redux, etc.) than C# does. What are you missing in particular? I guess the main thing missing in both is tail recursion.
Does TypeScript have any kind of "do notation" or "comprehension" syntax? LINQ is not the best by any means but it does seem like a big improvement over nothing.
No, it doesn't. I guess that's because monads are not considered idiomatic TypeScript. And for the special case of async programming there's async/await support.
These all exist, you just don't agree with them being reasonable:
Scala, Common Lisp, C++, D, Clojure, F#, OCaml, Kotlin, etc. Those could all count or not depending on your subjective opinion.
The truth is, it is difficult to create the perfect language, and any sufficiently complete language will always have parts that suffer in exchange.
> Scala, Common Lisp, C++, D, Clojure, F#, OCaml, Kotlin, etc.
Out of those, only OCaml really provides “reasonable” support for functional programming. The ability to manipulate compound values (say, lists or trees) directly, without using objects having a queryable physical identity, is of course a prerequisite.
> it is difficult to create the perfect language, and any sufficiently complete language (...)
Maybe the excessive amount of features is the problem in the first place.
User catnaroek was only talking about support for Functional programming. And they're right: it's hard to reasonably support typed functional programming without sum types.
And we're talking about a language dammit, not libraries or package managers. Let's have a good language first, then build the tools around it.
It's even worse: it's hard to reasonably support functional programming, typed or untyped, when list and tree values aren't directly expressible in the language's semantics. If you need a social convention not to query physical object identities, you have already lost.
The lack of a good standard library can always be fixed: write one. OTOH, language semantics issues can't be fixed without redefining the language, invalidating all code that has already been written in it.
- You ask for reasonable support for both object-oriented and functional programming; this already conflicts with the notion of state (which objects are supposed to contain in OOP) and the "stateless" ways of FP, but let's ignore that for a moment
- FP means different things to different people. Some would argue it has to have a well-thought-out type system, but what exactly does that mean? Is strongly but dynamically typed, like Clojure, okay (with added sugar like clojure.spec)? Or does it have to have static types? Does it then have to have type inference like in Elm? Or definitely algebraic data types like in Haskell? Or maybe dependent types like in Idris?
- Concurrency models vary too, for example: STM in Clojure and Haskell aren't exactly the same. Actor model in Erlang is a beast and sounds awesome on paper, but in real life process coordination may become a headache. Primitives based on CSP model are also known to have their own quirks. Concurrency in any language is never simple and straightforward.
- You're asking for a fast FP language; with immutable data structures and lazy/non-strict evaluation it's probably unrealistic to get it to run as fast as native code
- And "simple to learn" is also very subjective quality - for some Python is easy, but Clojure is hard. For some Haskell is not that difficult as people describe it. For some it feels like insurmountable mountain.
As you can see, it's not that simple. And we're probably not gonna get a "perfect" language that suits everyone anytime soon.
I would argue Scala already has all of them but the last one. However, there is a shitload of absolutely unnecessary features in the language. As the parent link states, Twitter just removed the shitty features and there you go: an easy-to-use object/functional language which runs fast and has awesome concurrency features.
It's difficult to agree with a definition of FP that doesn't require expressing as much computation as possible as function evaluation. Functions being mappings from values to values, having a rich set of values (not the same thing as objects!) is a prerequisite.
I'd have said the fact that no-one's done it is pretty good evidence it's not as easy as you think. As plenty of people have pointed out, FP means different things to different people. But so does OO. In fact, I'd argue that Scala is about as close as you're going to get to your spec. Not helped by the fact that, as any Scalazzi could tell you, the requirements of 1 and 2 contain contradictory elements and make complexity shoot through the roof. (You can get fewer FP features and less complexity by going for F# instead.)
Personally, I like Haskell. It's great for 1 and 3, 4 is pretty acceptable and 2 gets thrown in the bin.
Not a trendy language by any means, but C# meets all your criteria.
1. About the strongest OOP support of any language which doesn’t have a dynamic or Scala-esque exotic type system. Also supports non-OOP imperative, unlike Java.
2. LINQ is nigh-on the best FP support offered in any non-FP language's standard library, and really is one of the best FP libraries, period.
3. Invented async/await, by far the most ergonomic concurrency primitive yet created.
4. Fast and easy to tune for performance (stack-allocated value types which can also be passed by reference).
5. If you know Java or C++ you’ll be productive within minutes.
> 3. Invented async/await, by far the most ergonomic concurrency primitive yet created.
This feels like a bit of hyperbole. The actor model seems much more intuitive than async/await. Indeed, message passing between actors is how concurrency works in the real world (think about how people interact with each other).
I knew I'd get pushback on that one, it is somewhat hyperbolic; I'm here using "most ergonomic" to mean "closest source-level similarity to the synchronous equivalent." Which is perhaps not the "best" way to model concurrency, but has the advantage of being very easy to learn.
I wouldn't say Rust is simple to learn. The borrow checker alone is quite a steep hill (though the compiler errors are very good).
In my limited experience Rust and Swift both suffer from boilerplate/ceremony you have in many other OOP languages (C# too). Static languages for you I guess - there are benefits too...
Kotlin doesn't really support FP in the modern sense of the term; effects are all special cases in the language (null, async), so you can't abstract over them generically (e.g. you could never implement "traverse" in Kotlin because you can't even write the type signature it's supposed to have) and you can't use a custom effect if you want to do something the language designers haven't thought of. E.g. there's no nice way to do validation where you want to have a message in the case of failure (in some ways this is actually a regression from Java, which at least had checked exceptions for this kind of use case, not that they're a great solution).
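For reference, the signature in question needs a higher-kinded parameter `F[_]`, which Scala can express and Kotlin cannot. A self-contained toy version (not the cats/scalaz API):

```scala
// Minimal Applicative + traverse, to show the higher-kinded signature
// (F[_]) that the comment refers to. Toy version, not a library API.
trait Applicative[F[_]] {
  def pure[A](a: A): F[A]
  def map2[A, B, C](fa: F[A], fb: F[B])(f: (A, B) => C): F[C]
}

// "Effect-generic" traversal: works for Option, Either, a validation
// type, or any user-supplied F with an Applicative instance.
def traverse[F[_], A, B](as: List[A])(f: A => F[B])(implicit F: Applicative[F]): F[List[B]] =
  as.foldRight(F.pure(List.empty[B]))((a, acc) => F.map2(f(a), acc)(_ :: _))

implicit val optionApplicative: Applicative[Option] = new Applicative[Option] {
  def pure[A](a: A): Option[A] = Some(a)
  def map2[A, B, C](fa: Option[A], fb: Option[B])(f: (A, B) => C): Option[C] =
    for (a <- fa; b <- fb) yield f(a, b)
}
```

Kotlin can encode specific cases (a traverse for lists of nullables, another for futures) but not this one generic signature, because there is no way to abstract over the `F[_]` itself.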
The thing that Scala has and Kotlin completely lacks is the tooling to allow you to avoid runtime reflection. Typeclasses and macros (and especially both together) are way too powerful-yet-principled to give up once you're used to them.
Annotation processing is impossible to reason about; it's equivalent to macros but much less visible.
Most Scala doesn't need macros (other than those inside shapeless) because Scala has a) higher-kinded-types, for/yield and an existing library for working with context-like types, and b) generic traversal of object graphs via shapeless. Between them those cover virtually all the use cases for annotation processing, macros or anything like that.
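A stdlib-only sketch of point (a): because for/yield desugars to `map`/`flatMap`, the same comprehension shape works for `Either`, `Option`, `Future`, or a custom context type, with no macros involved (assuming the right-biased `Either` of Scala 2.12+):

```scala
// for/yield desugars to map/flatMap, so ordinary stdlib Either gives
// short-circuiting error handling with no special language support.
def parse(s: String): Either[String, Int] =
  try Right(s.toInt)
  catch { case _: NumberFormatException => Left(s"not a number: $s") }

def div(a: Int, b: Int): Either[String, Int] =
  if (b == 0) Left("division by zero") else Right(a / b)

// Stops at the first Left and carries its message along.
def compute(x: String, y: String): Either[String, Int] =
  for {
    a <- parse(x)
    b <- parse(y)
    r <- div(a, b)
  } yield r
```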
I agree that most application-level code doesn't need new macros (you rarely have to develop a new macro to add a feature to an application) but there are definitely a lot of very legitimate use-cases for them outside of Shapeless. To give just a few:
Even for cases where inductive implicits can do the job, there is still a bit of a push to prototype with shapeless and then rewrite with direct derivations of your instances later on to improve compile times. See e.g. https://github.com/circe/circe-derivation.
macwire and scala-async are bad ideas in my book (having worked on codebases that use them) and I suspect the same of machinist and refined; they're both making too big a change to the language to make sense as a library, the cost/benefit doesn't stack up. (I do think prototyping language improvements is one other thing macros are good for, but that's not something you'd do in production code). quicklens looks like a reimplementation of the same stuff that shapeless does, so not really a different use case.
There are performance problems with inductive implicits in the current compiler; I'd rather see that fixed at the compiler level (I believe there's already a PR?) than see every library try to work around it. Actually I'd like to just build something like Shapeless' functionality into the language proper; I'd say e.g. case classes and sealed traits should have come with the equivalent of LabelledGeneric already.
> macwire and scala-async are bad ideas in my book (having worked on codebases that use them) and I suspect the same of machinist and refined
We'll just have to disagree on these sorts of "power tools", I think.
Quicklens does a lot more for you than Shapeless's lenses, which to my knowledge not very many people are using for new projects at this point. I think once the Cats port lands, I'll be moving to Monocle for my own stuff, though (I think this will be the trend).
Of course, the compiler getting better about inductive implicits can only be a good thing, but I do think there may be some upper limit to what can be done short of an effort similar to what Eugene is taking on with Reasonable Scala, since we are using the typechecker as an unwitting computer when we program that way. The work that Miles did to improve inductive implicit performance is tremendously helpful, but it does not fully solve the problem. As an example, my very induction-heavy codebase compiles about twice as fast with his patch enabled. That's nothing to scoff at and is super impressive for a single patch, but it doesn't make the problem evaporate.
I fully agree with you that it would be great if Generic-like stuff were baked into the language. To me, that sort of thing is the only compelling use-case for whitebox def macros. I feel like things would be better if we could just drop those.
I find continuation-based stuff impossible to reason about; the only way I can understand the flow is in terms of values i.e. futures. And if we're working with futures I'd rather have them work as a plain old monad with for/yield that looks like any other monad, rather than having to remember what some unique future-specific syntax desugars into.
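Concretely, a `Future` pipeline written with for/yield looks like any other monad; the `fetchUser`/`fetchScore` functions below are toy stand-ins:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Toy stand-ins for asynchronous calls.
def fetchUser(id: Int): Future[String] = Future(s"user-$id")
def fetchScore(name: String): Future[Int] = Future(name.length)

// The same for/yield shape as Option or Either: nothing Future-specific
// to remember, and no special async syntax to desugar in your head.
val result: Future[Int] =
  for {
    name  <- fetchUser(42)
    score <- fetchScore(name)
  } yield score

val score = Await.result(result, 5.seconds) // blocking here only for the demo
```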
> Annotation processing is impossible to reason about
Maybe for you but given how popular annotation processing is on the JVM (and especially on Android), plenty of people are reasoning with them just fine.
Actually, it's so popular and easy to use that it's slowly replacing reflection in pretty much every library I've seen.
"Better than reflection" is a low bar, and I'd suspect it's more popular on Android precisely because better alternatives like Scala are less popular there (and also because Android codebases are smaller, so can get away with more unreasonableness).
In my experience, annotation processors are definitely way harder to reason about than def macros in Scala (especially blackbox ones). I find them to be about as hard to reason about as macro annotations despite being less powerful. They are also way harder to write and distribute and not at all an analogue to typeclasses.
Kotlin looks very nice. But I think there are languages with better concurrency support - Elixir (cheap green threads in the VM) and Scala (actors and futures) come to mind.
I think it really depends on your use case too. Kotlin is great for mobile apps and games, where coroutines are fantastic (see also C#/Unity there). For writing scalable web services, you need more control and resilience. There Elixir and Scala are the heavyweights.
> Identify a subset of Scala that can be compiled with reasonable speed
I am afraid of Scala fragmentation. There is no way a new compiler can be 100% compatible. Hopefully it will be tested on major projects, or there will be some official Scala language spec.
It would be nice to add a switch to the old scalac which downgrades its features, to make it compatible with the new scalac.
I believe Scala has too many features, and it will be very difficult to fix compilation speed. But there could be a decent boost just from rewriting the compiler from scratch. If nothing else, we will get better error messages.
Compilation speed was a major reason why JetBrains started Kotlin.
Disclaimer: I worked with Scalac / IDE back in 2009. Kotlin fanatic.
> There is no way new compiler can be 100% compatible. Hopefully it will be tested on major projects, or there will be some official Scala Language spec.
I, for one, think that an official language spec would do the language a lot of good. As it stands, it is sometimes difficult to separate bugs from features in scalac...
I am not sure how complete/correct it is, but there is one for 2.9 [1]
Martin Odersky & co have been focusing most of their effort over the last few years on Dotty (aka Scala 3), which succeeded in giving Scala a proven theoretical basis in exchange for a few esoteric type-system features they couldn't prove. [2]
There are also specs for later versions, it's just that they are pretty outdated and unmaintained. (Like pretty much everything related to documentation.)
I think Jetbrains has some experience using that spec to implement the IDE's Scala typechecker. Long story short: years later, it still doesn't work.
I would be kind of concerned about the esoteric new features they added to Dotty, looks like the lesson has not been learned yet.
- "Numeric harmonization" which makes some of the existing issues with implicit numeric conversions even worse.
- Multiversal equality, which adds even more special rules to the implicit resolution algorithm, complicates the mental model of the language, is either extremely invasive or severely limited when applied to real-code, and falls apart when dealing with existing language features of Scala like variance.
- Addition of inline, which has absolutely no reason to be a new keyword.
- The tentative idea of adding T? for T|Null (as it appeared on some slides) which makes it obvious that no thought has been given to how the lessons of handling nulls apply to Scala.
Then we have enums, which are just poorly designed and executed: they repeat the mistakes made with both case classes and scala.Enumeration, do not address the problems they are supposed to solve, and fail to address actually valid existing problems, all while introducing not one but two additional syntactic constructs which are unlike anything we had before.
On top of that we got incompatible additions in minor releases of 2.12:
- Additional places where commas can be added, which ignores one of the most common complaints about Scala: Too many syntactic variations to express the same thing.
- The addition of @showAsInfix which is a solution in search of a problem.
But at least unsound type projections got restricted, I guess. (That's the only major thing I can think of that will have a larger impact...)
Probably the removal of forSome and cleaning up existentials?
Twitter has pretty big resources. It's probably feasible for them to spin up a build farm that runs their compiler against all active Scala projects on GitHub, measuring error count and compilation speed as they go along.
I'm an intermediate Scala user. Scala compile times hurt my productivity significantly. Our best Scala developer says his approach is "I know the language so well, I usually write the whole feature in one go before compiling. That way I avoid the slow compile times."
Which feels to me like you will miss one of the advantages of having a compiler in the first place - that it helps you write correct code.
To be fair, in an IDE you do get some of the compiler feedback in real time. But not for the whole code base, if you're working on things that have high coupling with other modules then it can be really painful. (I'm looking at you, Spray serialization!! Ugh)
Depends on the size of the code base, how good your incremental compilation is, and what percentage of the code changes frequently. At Twitter the values are: large, great, and relatively high.
I've seen a lot of complaints about Scala's compilation speed (I believe it's either the 1st or 2nd most common complaint), but I've not really had any issues myself.
Could people who have had issues with compilation speed explain their workflow, I'd like to better understand the problem.
For example, here's my general workflow:
1. code in IntelliJ until the feature/fix is complete, and there aren't any red-wavy lines in my source tree
2. run `compile` in an already running sbt session.
As a result, I only find myself "waiting" for a compile a few times a day. And even then, because I'm using the incremental compiler it's usually only a few seconds.
The only reason I run the sbt compile at all is because the IntelliJ compiler is known to be a bit buggy, especially around things like implicit resolution and some of the more advanced features used for abstract programming.
I have seen CI builds take some time, but even then, in every project I've worked on, the complete compile-time is dwarfed by the time taken to run tests.
I'm not trying to dismiss problems that others have; I would love to learn more about them!
Here is one that I run into frequently. You're building a CRUD app, but want some semblance of sane FP interaction with your database. So you use Doobie [1] because free monads and all.
So you type your query using a Doobie SQL string interpolator...
sql"SELECT ItemId from dbo.Item WHERE IsOnSale = TRUE"
...but then you forget all of the ".query[Int].vector.transact(tx).unsafePerformSync" crap that you are supposed to tack onto the end of the string to actually run your query. What are your options?
1. Type a '.' character in your IDE and watch it crash while the presentation compiler thrashes around, sifting through a combinatorial explosion of implicit values (or whatever it is that makes it so slow). You quickly learn to stop using autocomplete. In a sane world, this should take milliseconds and you should quickly get a list of methods to guide you down the right path.
2. Type a '.' and try to recompile. Wait 30 seconds. While you're waiting, what work can you possibly do??? None.
3. Stop what you're doing, open up The Book Of Doobie [2], try to remember which page has the syntax that you need for doing this one simple thing, go to that page, navigate to the right part of the webpage, read the crap, think a bit, copy/paste it into your code. This FEELS more productive than (2), but is it really?
This is just one example, maybe not terribly compelling, but it happens. People (well at least me) have a limited amount of memory for administrivial crap to hold in our heads while we try to get our work done. Tooling is supposed to help with that, but with Scala it really starts to get in the way. Flow state just isn't possible.
I have heard anecdotes of developers running two or more instances of scalac so that they can work on more than one feature at a time in order to not completely waste away their time. Imagine the context switching there, and having to pay attention to the little oven timer and switching context when one compiler is finally done recompiling the same code it already compiled hundreds of times.
Thanks! I haven't used doobie yet (though I intend to); I had no idea its API caused IntelliJ such an auto-complete headache.
I'm curious how a faster scalac would help in this case though. Perhaps I'm missing something, but scalac's errors don't really facilitate this kind of API exploration.
I completely agree that exploring an API (because who wants to memorise the standard library and API of all your dependencies) in IntelliJ is often quite cumbersome. One trick I've come to lean on a lot is to type '.ensur' to determine the type of the current term; this causes IntelliJ to present the method signature for 'ensuring', which is available (implicitly) on everything (except Nothing), showing the current type in its return type.
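For anyone unfamiliar with the trick: `ensuring` comes from `Predef` and is available on effectively every value, returning it unchanged (optionally checking a predicate), which is why asking the IDE for its signature reveals the current term's type:

```scala
// `ensuring` is an identity method from Predef, available on any value;
// it optionally checks a predicate at runtime and returns the value as-is.
// Typing ".ensuring" in the IDE shows its signature, whose return type
// reveals the inferred type of the expression you called it on.
val doubled = List(1, 2, 3).map(_ * 2).ensuring(_.forall(_ % 2 == 0))
```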
FWIW, the author of Doobie doesn't even use an IDE, which is par for the course I believe for users of Scalaz, Cats, Shapeless, etc. inference heavy libraries.
Abstraction isn't free in Scala, you pay for the features used; tooling suffers as a result, thus projects like Twitter's RSC come into being.
You may want to check out Quill, or perhaps Slick (though the latter I suspect is similarly IDE-challenged). Barring that, give your IDE loads of RAM.
They're not a complete non-issue. They're just much less of an issue than not using Scala.
(And bear in mind that Twitter has literally the largest Scala codebase in the world. Almost all scala programmers work on something an order of magnitude smaller, with correspondingly smaller compile times)
One of the huge benefits of a strongly typed language is the ease it brings to managing large code bases. Now imagine a Google-size monorepo and Scala compilation speed.
It depends on project size. Twitter likely has some really huge Scala projects, but many organizations are now going for microservices, so compile time is less of an issue.
Overall, I feel like with the success of Go and Kotlin, having an easy-to-learn language definitely wins. Faster runtime speed might be a bigger 'market' than fast compile times, which matter once you have a large enough codebase. I do wish Scala Native were further along, and if the language were going to get simpler, I'd rather it focus on native speed than on compilation time. Quite a few Scala compiler releases have made big strides in compilation times, to the extent that I feel Java vs. Scala compilation times are about the same or well worth the cost, so this has been and will be an ongoing effort. The Java group is also working on going native, so we will likely see Java native when it gets there.
IIUC, this is about AOT but still using the JVM. This is a fix for slow warm-up, but in longer running processes the eventual behavior isn't different from JIT. Which is important in its own right, but very different from a hypothetical 'Java native' in the sense of 'Scala native', without a JVM at all.
So Clojure is slow to start and Scala is slow to compile. Is Scala also slow to start, as compared to a Java program that performs the equivalent functions?
Clojure doesn't have slow start-up, Leiningen and Boot-clj make it feel slow. But once you start a REPL you can keep it running. I keep mine running for weeks.
I know. I've often heard this, but for me that's a bit tricky b/c I'm worried that my current definition of "foo" is not what it is in my .clj file b/c I may have monkey-patched it in the REPL and forgot I did that.
Huzzah! Maybe this is already in the works, but I would love to see a compiler that stays hot in RAM and works incrementally.
RAM is cheap nowadays, and the number one reason I want a fast compiler is to speed up my feedback cycle between making a change and evaluating the effects of the change. So I want something that squats on a large chunk of RAM and works in parallel with me, updating with every edit I make.
That's a good start, but won't the compiler still be batch-oriented? By which I mean doing a ton of I/O and activity only when I hit go?
I'm talking about something much more real time. I want the parser to keep everything hot, updating every time I hit a key. And whenever I get to anything that the parser approves of, downstream stages, which also keep everything hot, dynamically update all of their structures, revising object files as they go.
Batch orientation is great, and I understand why compilers started there. But I want something that moves beyond that. It's sort of the difference between a batch-oriented rendering system like TeX or early word processing versus a modern WYSIWYG word processing setup, where we burn resources prodigiously to give users much faster feedback loops.
I'd like to see a Scala compiler that works more like a proof assistant like Coq. I want the compiler's job to be to help me quickly write correct code. That should be first-class functionality. Batch-compiling the same source files over and over again should be just a neat trick that the compiler can do on the side.
This is the compiler from Scala source to Java bytecode. It does type checking, some optimizations (like inlining, removing some instances of boxing, or converting tail-recursive functions to loops), and lowers all Scala-specific features to Java constructs (e.g. traits become interfaces plus helper classes; lazy values become methods with accompanying backing fields; lambdas get lifted into the class they are defined in).
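A minimal sketch of two of those lowerings (the `@tailrec` annotation just asks the compiler to verify the loop rewrite it would perform anyway; the lazy val becomes a guarded accessor in bytecode):

```scala
import scala.annotation.tailrec

object CompilerFeaturesDemo {
  // scalac verifies this call is in tail position and emits a plain loop,
  // so large inputs don't grow the stack.
  @tailrec
  def sum(xs: List[Int], acc: Long = 0L): Long = xs match {
    case Nil    => acc
    case h :: t => sum(t, acc + h)
  }

  // In bytecode this becomes a backing field plus an accessor method that
  // initializes the field on first use (guarded by a bitmap in Scala 2).
  lazy val expensive: Int = { println("computed once"); 42 }

  def main(args: Array[String]): Unit = {
    println(sum((1 to 100000).toList)) // 5000050000; safe despite 100k "recursive" calls
    println(expensive)
    println(expensive) // second access reuses the cached value, no second println
  }
}
```

Decompiling the resulting class file (e.g. with `javap -c`) would show a jump-based loop in `sum` rather than a recursive invocation.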
The JIT works on Java bytecode. Java also has a similar compiler (javac) from Java -> bytecode.
They are talking about compilation of source code to Java byte code, i.e. the same kind of compiler as javac from the JDK. (There is also Scala Native, but that doesn't seem to be what this is about.)
Writing a reasonable compiler would need a reasonable language, no? Scala has had its chance over the last 13 years; now it is time for this abomination to go.
Funny you should mention Kotlin, as it's the result of JetBrains doing just that: developing their own language to write their apps. Mozilla represents another successful example with Rust.
I'm of the opinion that developing new, purpose-built languages (or extending existing ones) is a strategy that isn't given as much consideration as it should.
1) Twitter certainly has the right engineers to do this.
2) If they can speed up Scala compile times by 5 minutes a day (extremely conservative -- compilation times tend towards 1s per file) across even 1,000 developers (Twitter had around 2,000 when I was there), that immediately funds a 5+ person development team. This isn't the sort of thing you throw a huge team at, anyway.
3) It helps with recruiting talent looking to work in environments with strong type safety, or who just want to work for companies willing to invest in making their software engineers more productive.
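The back-of-the-envelope arithmetic in point 2 checks out; here is a quick sketch (the workdays-per-year and hours-per-engineer-year figures are my own assumptions, not the commenter's):

```scala
object CompileTimeSavings {
  // Figures from the comment above, plus two assumed constants.
  val minutesSavedPerDevPerDay = 5      // "extremely conservative" estimate
  val developers               = 1000
  val workdaysPerYear          = 240    // assumption
  val hoursPerEngineerYear     = 2000.0 // assumption: one full-time engineer-year

  val minutesSavedPerYear = minutesSavedPerDevPerDay * developers * workdaysPerYear
  val engineerYears       = minutesSavedPerYear / 60.0 / hoursPerEngineerYear

  def main(args: Array[String]): Unit =
    println(f"~$engineerYears%.0f engineer-years saved annually")
}
```

That works out to roughly 10 engineer-years per year, comfortably above the "5+ person team" claimed.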
That's... not true. You mention Kotlin, that was developed by JetBrains as a better language to develop Intellij and plugins.
Hack/HHVM was developed by Facebook to modernize PHP.
There are plenty of cases where organizations successfully developed their own languages/compilers that gave them advantages. It's insane only if you have no good reason to do it. This sounds like they have a need: compile times are slow.
In 2003 PHP was the most sensible choice for rapid prototyping of a web app and for early facebook it was more important to iterate and experiment than build it right. Even today it still holds its own, but there are plenty of alternatives (rails, node.js, ...)
You're playing with words. They are creating their own language which is a subset of Scala, and they decide which features to keep based on whether they are fast enough to compile.
The point is: developing your own language to ship your apps is crazy.
I think there is a significant difference in that developing your own language requires a rewrite to use the new language, not a tweak to existing code.
If you insist on them "creating a new language", then say that they created a new language to speed up development, not to ship their apps. They were already shipping their apps. This is an optimization that helps them speed up development on their existing codebase.
They are already diverging from Scala since they are only implementing a subset of it:
> We are planning to start small with a trivial subset of Scala and then gradually add features, carefully measuring their impact on compilation performance
It will be a while before they reach parity with Scala, and I'll bet the project will be canceled before they even come close.
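Purely as a hypothetical illustration of what a "trivial subset" might look like (the announcement doesn't specify which features make the initial cut), one could imagine keeping plain objects, defs, and case classes while initially dropping typecheck-heavy features:

```scala
// Hypothetical subset sketch; the actual feature set chosen by the
// project is not specified beyond "start small".
final case class User(id: Long, name: String)

object SubsetSketch {
  // Plausibly "in": plain defs, case classes, string interpolation.
  def greet(u: User): String = s"hello, ${u.name}"

  // Plausibly "out" at first, since they are expensive to typecheck:
  // implicit search, macros, higher-kinded types. For example:
  //   implicit val byId: Ordering[User] = Ordering.by(_.id)

  def main(args: Array[String]): Unit =
    println(greet(User(1L, "world")))
}
```

The stated plan of measuring each feature's impact on compilation performance suggests the boundary would be drawn empirically, not by language design taste.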
Sounds like they are intending it as a research effort and hope to make progress with the entire Scala community, not fork it.