This is called “the dead horse paper” for some reason. There’s been some interesting work since. This isn’t my exact area of research, but I’m pretty well versed in it and I work with one of the authors on this paper so I’m happy to try and answer any questions.
The chief issue here is: how do you assign blame for typing violations? TypeScript is unsound: since it just erases types, an untyped library that a typed component uses might return something the typed component isn’t expecting, so you get a type error inside typed code. Gradually typed Python variants are typically better, though they usually just inspect the shape of the data and don’t check that, e.g., every element in a list is correctly typed as well. This can lead to more unsound behavior. Typed Racket is fully sound, but performance can degrade spectacularly.
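A minimal TypeScript sketch of the erasure problem, with an invented library function standing in for "untyped code":

```typescript
// Hypothetical untyped library, modeled as returning `any`.
function untypedParse(input: string): any {
  return { value: input };          // returns an object, not a number
}

// Typed caller: the annotation claims `number`, but TypeScript erases
// types at runtime, so nothing actually checks that claim.
const n: number = untypedParse("42");

// Compiles cleanly, yet at runtime `n` is an object, so the error
// surfaces later, inside typed code, far from the real culprit.
const doubled: number = n * 2;      // NaN at runtime
```

Typed Racket would instead wrap this boundary in a contract and blame the untyped side when the check fails, which is sound but is also where the runtime cost comes from.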
There’s been some work reviving the dead horse being beaten in this paper. Lmk if you have questions.
More papers:
Muehlboeck, Fabian, and Ross Tate. “Sound Gradual Typing Is Nominally Alive and Well.” Proceedings of the ACM on Programming Languages 1, no. OOPSLA (October 12, 2017): 56:1-56:30. https://doi.org/10.1145/3133880.
Moy, Cameron, Phúc C. Nguyễn, Sam Tobin-Hochstadt, and David Van Horn. “Corpse Reviver: Sound and Efficient Gradual Typing via Contract Verification.” Proceedings of the ACM on Programming Languages 5, no. POPL (January 4, 2021): 53:1-53:28. https://doi.org/10.1145/3434334.
Lazarek, Lukas, Ben Greenman, Matthias Felleisen, and Christos Dimoulas. “How to Evaluate Blame for Gradual Types.” Proceedings of the ACM on Programming Languages 5, no. ICFP (August 18, 2021): 68:1-68:29. https://doi.org/10.1145/3473573.
> Typescript is unsound: since it just erases types, an untyped library that a typed component is using might return something the typed component isn’t expecting leading you to get a type error in typed code. Gradually typed python variants typically are better, though they usually just inspect the shape of the data and don’t check that eg every element in a list is correctly typed as well
This is not typically what is meant by soundness. Most gradual type systems erase types (see also Python, Ruby, Elixir), and provide ways of overriding the type system's inference and forcing a certain expression to be typed a certain way, even if that type cannot be proven by the type checker.
What makes TypeScript unsound is that, even if you don't override the type checker at all, there are certain expressions and constructs that will produce no compile-time error but will produce a runtime error. This is a deliberate choice by the TypeScript team to make the type checker less correct, but simpler to use and adopt. The Flow type checker took the opposite approach and was sound, but was occasionally harder to use.
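A small example of that deliberate choice, assuming TypeScript's default compiler settings: indexing past the end of an array produces no compile-time error, but the value is `undefined` at runtime.

```typescript
const xs: number[] = [1, 2, 3];

// The type checker says `y: number`; at runtime it is `undefined`.
// (Enabling the `noUncheckedIndexedAccess` flag makes this a
// compile-time error, but it is off by default.)
const y: number = xs[5];

console.log(typeof y);              // "undefined"
// y.toFixed() would throw a TypeError here, with no compile-time warning.
```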
I believe most type checkers in other languages tend to hew closer to the TypeScript approach, and favour being intuitive to use over being completely sound. However, they do work in the same way of being compile-time constructs only, and not affecting runtime at all. Python is slightly exceptional here in that the type annotations are available as metadata at runtime, but are not used for type checking unless the user explicitly checks them at runtime. There are a couple of runtime libraries that add decorators that automatically do this runtime type checking, but (a) their usage is fairly rare, and (b) their presence does not change how the type checker behaves at all.
No, this is exactly what is meant by soundness. Using the `any` type in TypeScript can result in values whose static type is `number` actually being strings at runtime, which is unsoundness.
Type erasure has nothing to do with soundness: OCaml also erases types at runtime, but it is sound. In TypeScript, the `any` type, unsafe casting, non-null assertions, and a couple of other constructs are unsound. Gradual typing at the package level was addressed by the (also unsound) declaration repository, DefinitelyTyped (overall successful, I'd say; in my opinion it's the primary reason TypeScript won over Flow).
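The escape hatches mentioned here can each be shown in a few lines (a sketch; the values are arbitrary):

```typescript
// 1. `any` silently launders one type into another.
const a: any = "hello";
const viaAny: number = a;                 // static type number, actual string

// 2. Unsafe casting: `as unknown as T` accepts any conversion.
const viaCast = "world" as unknown as number;

// 3. Non-null assertion: `!` tells the checker a value can't be
//    null/undefined, with no runtime check to back it up.
function first(xs: string[]): string | undefined {
  return xs[0];
}
const viaBang: string = first([])!;       // static type string, actually undefined
```

All three compile without complaint; the mismatch only shows up when the value is used.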
I have a problem reconciling "sound gradual typing": is there such a thing at all? It sounds like a contradiction to me. An untyped function's result can be treated as "unknown" (as opposed to "any", to use TypeScript terminology) and checked at the call site, as if it were an I/O boundary; that's fine. But the arguments, on the other hand, must have some type declaration in the function's signature to be considered sound. Injecting runtime checks for arguments into an untyped library feels nonsensical, because if you can do that, then you have already done static analysis that inferred the argument types, and you could just put those into the function signature as static type declarations; i.e., that's just more sophisticated static inference that doesn't require injecting runtime code.
You could argue that there may be some dynamism that cannot be expressed at the type level and that requires runtime checks deeper in the untyped code, but if it cannot be expressed then by definition it's unsound. And frankly, who cares about runtime checks nested deep in untyped code? What's the difference whether it blows up at runtime in the middle of that untyped function with a type error or with some other error?
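The "unknown at the boundary, checked like I/O" idea from the comment above can be sketched like this (function names and shapes are invented for illustration):

```typescript
// Hypothetical untyped code, visible to typed code only as `unknown`.
function untypedConfig(): unknown {
  return { port: "8080" };                // string where a number is expected
}

// Runtime check at the call site, as at an I/O boundary.
function asPort(raw: unknown): number {
  if (typeof raw !== "object" || raw === null) {
    throw new TypeError("expected an object");
  }
  const port = (raw as { port?: unknown }).port;
  if (typeof port !== "number") {
    throw new TypeError("port must be a number");
  }
  return port;
}
```

Here `asPort(untypedConfig())` fails immediately at the boundary with a `TypeError`, rather than letting the bad value flow into typed code, which is exactly the blame-assignment question the thread opens with.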
I’ve looked at Typst and it looks attractive. I’m not quite willing to invest in it until, say, PACMPL lets me use it there. Any idea if that will happen any time soon?
For me, it's been invaluable for notes and memos (and my thesis). When it comes time to submit a paper, I transcribe everything to LaTeX. It's a pain, but it's a lesser pain than transcribing from handwritten notes scattered across three notebooks and several crumpled-up sheets of printer paper in the bottom of my bag. That's my own peculiarity; YMMV.
(Edit: also, ugh, I feel like such a goof for having written that. Thanks for being nice about it.)
"Yes," said Arthur, "yes I did. It was on display in the bottom of a locked file cabinet stuck in a disused lavatory with a sign on the door saying Beware of the Leopard."
There are 5 to 10 unique things about Tesla, especially at that price point. But yes, let’s focus on door handles and how they behave in an edge case of an edge case.
A collision where you lose power. I mean, let’s be real: it’s really just whenever the door loses power, or the switch gets damaged, or… I feel there are plenty of times when a manual latch is preferable.
Sam will definitely know more about this than I will, so if he contradicts me, listen to him.
If I am not mistaken, the Racket language does not convert to CPS during compilation. Instead, when you want to capture a continuation, I think you just get a pointer to the stack frame you want. All I know for sure is that it uses something called A-normal form, which is like SSA in some ways, and that a continuation is two 64- or 32-bit words, depending on your architecture.
The main implementation of Racket today is built on top of Chez Scheme, which uses the techniques described by Dybvig that I linked to.
In the earlier implementation of Racket, it indeed doesn't convert to CPS, but it does use something like A-normal form. There, continuations are implemented by actually copying the C stack.
Some amount of this has got to be due to Covid. I used to tutor a middle school boy, and he was probably two years behind where he should have been. Because of this, he was lacking the foundation that he needed to progress. It was so bad.
TikTok also gained popularity at the same time as Covid. Someone needs to do a study on screen time over the last five years for school-age children. We do know screen time is correlated with poor academic performance.
Undefined behavior does not violate correctness. Undefined behavior is just wiggle room for compiler engineers to not have to worry so much about certain edge cases.
"Correctness" must always be considered with respect to something. If we take, e.g., the C specification, then yes, there are plenty of compilers that are correct according to that spec, UB and all, in almost every way people will encounter. Yes, there are bugs, but they are bugs, and they can be fixed. The LLVM project has a very neat tool called Alive2 [1] that can verify optimization passes for correctness.
I think there's a very big gap between the kind of reliability we can expect from a deterministic, verified compiler and the approximating behavior of a probabilistic LLM.
Compilers use heuristics that may produce dramatically different results between compiler passes. Timing effects during compilation may constrain certain optimization passes (e.g. "run algorithm X over the nodes and optimize for Y seconds"), but in the end the result should still not modify defined observable behavior, modulo runtime. I consider that dramatically different from the probabilistic behavior we get from an LLM.
I keep a list of every time I change one of my work passwords, along with the date it would have expired, and the interval seems to fluctuate between 2.5 and 3.5 months with little consistency. Some of us used to have expiry disabled so we didn't need to keep changing passwords, but it was re-enabled some time ago, and we got confirmation it was for some sort of external security requirement.
People think cyber-insurance requirements are hard rules, but they aren't. For the most part, you just need to show effort, as it's practically impossible to be 100% compliant with every standard. For example, if you weren't rotating passwords but had proper MFA on your accounts, you're fine. Hell, they even have conflicting standards sometimes. I've been through this multiple times when I worked at an MSP. For the most part, leadership just pushes to meet those standards to CYA, which makes sense, but as long as you don't demonstrate gross negligence, they'll pay out.
Oo I’m excited for this. The old official language server is fine—it does its job on most of the code bases I’ve worked on, but occasionally I will do something funny that makes the compiler slow down and that pummels the LS performance. I hope this works out some of the kinks that occasionally would make elixir-ls slow.
Nit: there has never been an official LSP implementation until now, only community-authored ones. Even now, no Dashbit employees or language core members are directly involved in this project on an ongoing basis.
IMO that contributes powerfully to the quality of the experience of using any of the options.
ShortCat uses the accessibility API to put Vimium-style keyboard links on buttons and text fields in any app.
I find that Vimium works faster in Firefox than using ShortCat to click around websites, so I use both, but ShortCat technically should do everything (clicking-wise) that Vimium does.
I wonder if it has to do with accessibility API thread locking. I found a different extension I used to emulate an i3 style environment suffered when I used the Unity game engine. It ended up being limitations of the accessibility API.
This delay is configurable. The delay is there so you can actually type a bit before the UI pops up, and then the UI will be filtered to just your selection.
As I understand it, a payment system is in the works. I would gladly fork over some cash to support ShortCat. Looks like someone recommended Homerow, which appears to be pretty comparable to ShortCat, and it has a one-time purchase option.