I found this talk to be great. It goes through the history of OOP and how some of the ideas for the more modern ECS were embedded in the culture at the formation of OOP in the 1960s to 1980s but somehow weren't adopted.
It was pretty clear, even 20 years ago, that OOP had major problems in terms of what Casey Muratori now calls "hierarchical encapsulation" of problems.
One thing that really jumped out at me was his quote [0]:
> I think when you're designing new things, you should focus on the hardest stuff. ... we can always then take that and scale it down ... but it's almost impossible to take something that solves simple problems and scale it up into something that solves hard [problems]
I understand the context but this, in general, is abysmally bad advice. I'm not sure about language design or system architecture but this is almost universally not true for any mathematical or algorithmic pursuit.

[0] https://www.youtube.com/watch?v=wo84LFzx5nI&t=8284s
> I'm not sure about language design or system architecture but this is almost universally not true for any mathematical or algorithmic pursuit.
I don't agree. While starting with the simplest case and expanding out is a valid problem-solving technique, it is also often the case in mathematics that we approach a problem by solving a more general problem and getting our solution as a special case. It's a bit paradoxical, but a problem that would be completely intractable if attacked directly can be trivial if approached with a sufficiently powerful abstraction. And our problem-solving abilities grow with our toolbox of ever more powerful and general abstractions.
Also, it's a general principle in engineering that the initial design decisions, the assumptions underlying everything else, are themselves the least expensive part of the process but have an outsized influence on the entire rest of the project. The civil engineer who discovers, halfway through the construction of his bridge, that there is a flaw in his design is having a very bad day (and likely year). With software things are more flexible, so we can build our solution incrementally from a simpler case and swap bits out as our understanding of the problem changes; but even there, if we discover there is something wrong with our fundamental architectural decisions, with how we model the problem domain, we can't fix it just by rewriting some modules. That's something that can only be fixed by a complete rewrite, possibly even in a different language.
So while I don't agree with your absolute statement in general, I think it is especially wrong given the context of language design and system architecture. Those are precisely the kind of areas where it's really important that you consider all the possible things you might want to do, and make sure you're not making some false assumption that will massively screw you over at some later date.
> ... it is also often the case in mathematics that we approach a problem by solving a more general problem and getting our solution as a special case.
This is a really good point. LLL and "Feynman's" integral trick come to mind. There are many others.
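For anyone who hasn't seen the latter, it is exactly this generalize-then-specialize move: embed a hard definite integral in a parameterized family, differentiate under the integral sign, and recover the answer as a special case. The textbook example:

    I(a) = \int_0^1 \frac{x^a - 1}{\ln x}\,dx, \qquad
    I'(a) = \int_0^1 x^a\,dx = \frac{1}{a+1}

    I(0) = 0 \;\Rightarrow\; I(a) = \ln(a+1), \qquad
    \int_0^1 \frac{x - 1}{\ln x}\,dx = I(1) = \ln 2

The special case I(1) is awkward attacked head-on, but trivial once you've solved the whole family I(a).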
I got it into my head that this doesn't apply to NP-complete problems, so it should be discounted. When trying to "solve" NP-complete problems, the usual tactic is to restrict the problem domain to something tractable and then try to branch out into other regions of applicability.
> Those are precisely the kind of areas where it's really important that you consider all the possible things you might want to do, and make sure you're not making some false assumption that will massively screw you over at some later date.
I will say that abstraction is its own type of optimization and generalization like this shouldn't be done without some understanding of the problem domain. My guess is that we're in agreement about this point and the talk essentially makes this argument explicitly.
So, this is pretty difficult to test in a real-world environment, but I did a little LLM experiment. Two prompts, (A) "Implement a consensus algorithm for 3 nodes with 1 failure allowed." vs. (B) "Write a provably optimal distributed algorithm for Byzantine agreement in asynchronous networks with at least 1/3 malicious nodes". Prompt A generates a simple majority-vote approach and says "This code does not handle 'Byzantine' failures where nodes can act maliciously or send contradictory information." Prompt B generates "This is the simplified core consensus logic of the Practical Byzantine Fault Tolerance (PBFT) algorithm".
I would say, if you have to design a good consensus algorithm, PBFT is a much better starting point, and can indeed be scaled down. If you have to run something tomorrow, the majority-vote code probably runs as-is, but doesn't help you with the literature at all. It's essentially the iron triangle: good vs. fast. In the talk, the speaker was clearly aiming for quality above all else.
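For reference, prompt A's output had roughly the shape of this sketch (my own toy reconstruction in C++, not the model's actual code): collect every live node's proposal and take any value with a 2-of-3 quorum.

    #include <array>
    #include <map>
    #include <optional>

    // Toy crash-fault majority vote for 3 nodes, tolerating 1 silent
    // failure (modeled as std::nullopt). No Byzantine handling: a node
    // that lies or equivocates can split the vote undetected.
    std::optional<int> decide(const std::array<std::optional<int>, 3>& proposals) {
        std::map<int, int> tally;
        for (const auto& p : proposals)
            if (p) ++tally[*p];
        for (const auto& [value, count] : tally)
            if (count >= 2) return value;  // 2-of-3 quorum
        return std::nullopt;               // no quorum: cannot decide
    }

Going from this to Byzantine tolerance is not an incremental edit (you need multi-round, authenticated exchanges, as in PBFT), whereas scaling PBFT down to the crash-only case is straightforward. That's the talk's hard-to-easy point in miniature.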
So to be clear, I love all of those languages and wish the design space of prototype-based inheritance was explored more.
Having said that: why would he, though? In this particular talk he's trying to argue to people who program in C++ why the historical C++ architectures are limiting; he's not trying to convince anyone to switch languages. So those languages aren't his audience.
> I understand the context but this, in general, is abysmally bad advice.
The context, for the record, is inventing good general software architectures (and by extension generalized programming paradigms) for everyone to use. I agree with you that this is bad advice for general problem-solving, but for this context it absolutely makes sense to me. The hard problems are more likely to expose all the walls you'd otherwise only bump into later if you started from the oversimplified ones, so they are much better use cases for battle-testing ideas about what good architectures or programming paradigms look like.
I know a two-and-a-half hour video is a hard sell for most people, but I found this talk to be absolutely fascinating. It's not yet another tired “let's all shit on OOP just for the sake of it”-type thing—instead, it's basically nothing but solid historical information (presented with evidence!) as to how “OOP”, as we now know it, came to be. The specific context in which these various decisions were made is something that nobody ever cares to teach, such that it's basically long-since forgotten today—yet here it is, in an easily-digestible format!
Amusingly, an hour into the video he complains about information being hidden behind hours of video. It would work better as a paper, but apparently he hasn't written one. Probably a 20-30 minute read instead of 2.5 hours (or 1.25, since I'm running it at double speed).
To be fair, though, the video has an uncommonly high (by modern standards!) information density/signal-to-noise ratio—there's minimal filler, and it's very straightforward and to-the-point with regards to its subject matter!
13:45 -- "He's like, inheritance was like really powerful, but people just didn't know how to use it. Novices and experts apparently both couldn't use it, right. It was just uh you know, it's really good, but no one can figure out how to use it, I guess. Uh so that's a little bit weird."
I don't know if it matters to you, but the "video" is just a recording of a conference talk. It wasn't made with the sole intention of making a "video". I agree a text format version of the same information would be useful.
same difference? it just translates to "information being hidden behind hours of conference talks"?
isn't that arguably even worse? imagine the talk was not recorded, and the only way to learn this is to catch the speaker at a conference and listen to him talk?
I thought it was very interesting how Alan Kay and Bjarne Stroustrup may have been applying wisdom from their old fields of expertise and how that affected their philosophy.
There is an appeal to building complexity through Emergence, where you design several small self-contained pieces that have rich interactions with each other, and through those rich interactions you can accomplish more complex things. It's how the universe seems to work.
But I also think that the kinds of tools we have make designing things like this largely impossible. Emergence tends to result in things that we don't expect, and for precise computation and engineering, it feels like we are not close to accomplishing this.
So the idea that we need a sense of 'omniscience' for designing programs on individual systems feels like it is the right way to go.
Another angle I was thinking about, re the need for omniscience: physical systems seem compelled to play by these object-oriented rules, where encapsulation is the norm, information must be transmitted, and locality dominates. But if we are to try to emulate that ethos in our computer programs, one thing the OOP paradigm seems to gloss over is that you aren't allowed to _only_ write the 'atoms' of that universe - we also have to write the 'laws of physics' themselves (if you follow the analogy). And what is more global and all-touching than the laws of physics?
So if you look at it through that lens, the need for a little omniscience seems natural. The mistake was in thinking that the program was identified with the objects that the laws govern, when really you have to cover those AND the laws themselves.
The universe may work this way, but we're not God, and modes of computation that work like this will still inevitably be impossible for us to predict or comprehend. This may be interesting if you're trying to run simulations (remember the point about SIMULA?) but it's not something you could use to accomplish specific ends, I expect.
I had no idea Thief, one of my favourite games, was built with an ECS-like architecture. Two articles with more interesting details about Thief (I especially love its "temporal" CSG world model):
I skipped a fair chunk of the middle of this video as I really wanted to get to the Sketchpad discussion, which I found very valuable (starting around 1:10).
I think Casey was fairly balanced, and emphasized near the end of the talk that some of the things under the OOP umbrella aren't necessarily bad, just overused. For example, actors communicating with message passing could be a great way to model distributed systems. Just not, maybe, a game or editor. Along similar lines, I love this old post "reconstructing" OOP ideas with a much simpler take similar to what Casey advocates for:
Waaait, but I thought OOP was carefully crafted to "scale with big teams", and that's why it works so... ahem... "well". Turns out it was just memetic spillover from the creators' previous work?
And we absolutely needed 30-45 minutes to learn that that wasn't why it was created. The first part is a history of OOP languages to debunk something I'd never heard even claimed until I watched this video. The history was interesting, but also wrong in a few places. It was amusing to hear him talk about Arpanet being used in the 90s, though.
If I get bored with life I'll rewatch and take notes; the Arpanet remark was the main one that made me chuckle and stuck with me. The rest were details around Lisp and a couple other things that were outside his explicit research scope (he specifically researched C++, Smalltalk, and Simula, per his blog). Like claiming that everything in Lisp was based on lists (even in 1960 that wasn't true).
I'd just expect someone who takes 30+ minutes to debunk a claim that doesn't matter, and that most people have never heard, to be more particular about getting details correct.
On the first part of the video, to be more constructive: it does not matter why a language or tool or whatever was made. The claim he debunks is that OO languages were made to be good for working with teams. Whether it was made for that is immaterial, and no one needs 30 minutes of mostly historically correct video to get to The Truth(tm) of the matter. What's more interesting, and what he never bothered to get into, is whether OO is actually good for working with teams. (I can go either way; I've dealt with enough garbage OO programs to know that OO itself does not help things, but enough good OO programs to know that it can help things.)
To anyone who has not yet watched the video, the second half is interesting, the first half is mostly a waste of time.
I dunno man even just learning that Bjarne thought Simula's classes were cool specifically because of the domain of what he was working on—and learning that he ran into the same “unity build” problem that anyone who's worked on a large C++ project has encountered, years before literally anyone else in the world had—was fascinating, something I'd never heard before, and very interesting context in the broader scope of “OOP.”
This is in the talk; he explicitly says that it's often brought up that "OOP is made for large teams", "you're not using it as intended", "it's not made to model your domain hierarchy", etc. The first 30 minutes is his reaction to that, disproving it.
Whether that's true or interesting is a different question, but it's explicitly stated in the video, at the start, before he goes into the history.
1) 'AI Overview "No, it's not strictly true that Object-Oriented Programming (OOP) is exclusively made for large teams, but it does offer significant advantages in such environments."'
2) 'Casey Muratori -- The Big OOPs: Anatomy of a Thirty-five'
Object-oriented programming is popular in big companies, because
it suits the way they write software. At big companies, software
tends to be written by large (and frequently changing) teams of
mediocre programmers. Object-oriented programming imposes a
discipline on these programmers that prevents any one of them from
doing too much damage.
He spends the first half of his presentation debunking the meme that OO was created for working with teams, not that it happens to be good for working with teams. Your quoted bit is not evidence of someone making the first claim, only the second.
This is not moving the goal posts. Different people making the same claim may use different phrasing, and Google very much has recency bias. By searching for something slightly different we deprioritize the video we’ve already seen.
Muratori's statement (that he debunks in his talk): OO was created for teams.
Graham's statement: OO is useful for teams.
Those are distinct concepts, there's lots of evidence of statements like Graham's out there, and you've helpfully provided one. What igouy is asking for is evidence of the former claim.
The only place "team" shows up in the transcript is here:
> So. Language design is not at all the same kind of work it was thirty years ago, or twenty years ago. Back then, you could set out to design a whole language and then build it by your own self, or with a small team, because it was small and because what you would then do with it was small.
Which is not about OO at all. Got an actual quote or is this link really just an interesting but irrelevant non sequitur?
EDIT: For those coming in later who don't feel like clicking random Youtube links, parent post is referencing Steele's talk "Growing a Language".
You know that thing people do where they say the same thing using different words? You’ll have to comprehend the words rather than merely pattern matching on a specific phrase.
In particular, note how he talks about growing the language by adding new things to it which are like the existing parts of the language. Contrast that with APL, where the existing parts of the language all had funny symbols, but new things added by the user needed alphanumeric names. In Java the language gives you a bunch of classes and interfaces and whatnot, and you extend the language by defining your own classes and interfaces. You don’t have to do this yourself, of course, since you can include libraries alongside your code. Those libraries can extend the language by defining new classes and interfaces.
As he says:
    43:54 Back then, you could set out to design a whole language and then build it by your own self, or with a small team, because it was small and because what you would then do with it was small. Now programs are big messes with many needs. A small language won’t do the job. If you design a big language all at once and then try to build it all at once, it will fail. You will end up late and some other small language will take your place.
It's a remarkable stretch to go from those words to "OO was created for working with teams." It is neither implicit nor explicit in the talk and I don't know why anyone would make the claim you are making.
EDIT: You seem to be conflating the two ideas still. OO being created for teams is a different claim than it being good for teams. At most, you could stretch Steele's talk to the second, but not to the first.
True, he doesn't come right out and say the words. But don’t lose sight of the context. This is the big keynote speech at a conference called OOPSLA, or “Object-Oriented Programming, Systems, Languages & Applications”, in 1998. It is safe to say that the audience has heard of Java by now. They already know that the core language design choice made during the creation of Java was to make it object-oriented. Objects are everything and everywhere in Java. Even the simplest “Hello World” program in Java has to be written as a class with a main method.
This talk tells us why he and the others at Sun made that choice. He says right there that he wants Java to enable people to write large programs. He specifically contrasts it with the small languages and small programs of the past, the kind that were invariably written by individuals or small teams.
This is what he believes OOP is good for, and why researchers have been studying it for so long. He is reinforcing the belief of the attendees that OOP in general, and languages like Java specifically, are a panacea created for the explicit purpose of letting engineers work more efficiently together on large, complex systems.
We know from the historical record that early researchers did not have this belief. We know that many practitioners of the 90s and 2000s did. This talk may not be the genesis of that belief, but it is proximate to it.
The fault for this myth almost certainly lies with marketing. The best place to look probably won't be material aimed at those in the trenches, but something targeted at management types.
24:50 -- "A lot of people talk about these things. They talk about those compile time hierarchies and all that sort-of stuff; and they say - like you know, here's the thing that you just don't understand, it's all about large teams …"
Is there some example that you can point me towards, where a lot of people are saying compile time hierarchies are all about large teams?
(I suppose Ada is an example of design for programming-in-the-large.)
Is that a source given during "The Big Oops: Anatomy of a Thirty-Five-Year Mistake"? Otherwise the author might reasonably say that isn't what they meant.
A lot of people have said a lot of things about OOP for decades. So looking at the context in which something was said is an ordinary sanity check.
"Unfortunately, inheritance — though an incredibly powerful technique — has turned out to be very difficult for novices (and even professionals) to deal with." Alan Kay, The Early History of Smalltalk, page 82
That's taken from a section which reflects on introducing programming to children in the summer of '73 —
In part, what we were seeing was the "hacker phenomenon", that for any given pursuit, a particular 5% of the population will jump into it naturally, while the 80% or so who can learn it in time do not find it natural.
… it is likely that this area is more like writing than we wanted it to be. Namely, for the "80%", it really has to be learned gradually over a period of years in order to build up the structures that need to be there for design and solution look-ahead.
-
Here's how that Alan Kay quote is used in The Big OOPs —
13:47 -- It's because 10 years earlier, he was already saying he kind of soured on it. He's like, inheritance was like really powerful, but people just didn't know how to use it. Novices and experts apparently both couldn't use it, right. It was just uh you know, it's really good, but no one can figure out how to use it, I guess. Uh so that's a little bit weird.
-
Not "kind-of-soured on it" one page later —
There were a variety of strong desires for a real inheritance mechanism from Adele and me, from Larry Tesler, who was working on desktop publishing, and from the grad students. page 83
-
Not "kind-of-soured on it" but wanting a "comprehensive and clean multiple inheritance scheme" —
A word about inheritance. … By the time Smalltalk-76 came along, Dan Ingalls had come up with a scheme that was Simula-like in it's semantics but could be incrementally changed on the fly to be in accord with our goals of close interaction. I was not completely thrilled with it because it seemed that we needed a better theory about inheritance entirely (and still do). … But no comprehensive and clean multiple inheritance scheme appeared that was compelling enough to surmount Dan's original Simula-like design. page 84
I'm quite interested in this talk. Haven't finished yet, just watched the first part last night.
But I gotta say, I find the graphical background (the blurry text around the edge of the screen that's constantly moving and changing) supremely annoying, not to mention completely unnecessary.
Dear presenters and conference producers: please, please don't do that.
Wow, definitely much more than I expected from the title. Really enjoyed the surprise mini-talk about the origin of entity-component-system in the Q&A section as well.
One thing I think he left out, between 1960s Sutherland and 1990s Looking Glass, was column-oriented databases (70s and 80s).
That might have been important for the performance aspects that drove the resurgence in ECS, though I know he's focused more in this talk on how ECS also improves the structure for understanding and implementing complex systems: in the 70s and early 80s memory latency probably hadn't begun diverging from instruction rate to such an extreme degree, but in disks it was always a big issue.
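The connection makes sense: a column store and an ECS component array are both the struct-of-arrays layout, where a pass touches only the columns it needs. A toy C++ illustration (types and fields made up):

    #include <cstddef>
    #include <vector>

    // Array-of-structs: a movement pass drags every entity's cold fields
    // (health, flags, ...) through the cache along with the hot ones.
    struct EntityAoS { float x, y, z; int health; unsigned flags; };

    // Struct-of-arrays ("column") layout: each field is its own array,
    // so a pass over positions loads cache lines that are 100% useful.
    struct EntitiesSoA {
        std::vector<float> x, y, z;
        std::vector<int>   health;
    };

    void integrate(EntitiesSoA& e, const std::vector<float>& vx, float dt) {
        for (std::size_t i = 0; i < e.x.size(); ++i)
            e.x[i] += vx[i] * dt;  // sequential, prefetch-friendly access
    }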
Also, I'd like to hear more about ThingLab and whether it had some good stuff to it.
For those unaware, Casey Muratori started a project called Handmade Hero in 2014 to build a complete game from scratch while livestreaming the entire process, with the goal of showing people not just how, but why, rolling your own engine (hence the "handmade" part) is better than relying on Unity, Unreal, or some other leaky abstraction. He even solicited pre-orders for the finished product, IIRC.
Ten years later, he has no game, only a rudimentary, tile-based dungeon-crawler engine, and reams of code he's written and re-written (as API sands shifted beneath his feet), and the project seems to be permanently on hiatus now. Thus, Casey inadvertently proved himself wrong, and the conventional wisdom (use an existing engine) correct.
As far as OOP goes, 45 years has shown that it makes developers highly productive, and ultimately, as the saying goes, "real heroes (handmade or otherwise) ship." Casey's company was founded 20 years ago, and he's never shipped a software product.
He complains often about software getting slower, which I agree with. Yet how many mainstays of Windows 95/98 desktop software were written in a significantly OO style using C++ with MFC?
I think it's important to note a couple of things about this.
First, Casey offers refunds on the handmade website for anyone who purchased the pre-order. Also, the pre-orders were primarily purchased by people who wanted to get the in-progress source code of the project, not people who just wanted the finished game. I'm not aware of anyone who purchased the pre-order solely to get the finished game itself. (Though it's certainly possible that there were some people.) Whether that makes a difference is up to the reader, I suppose, since the original versions of the site didn't say anything about how likely the project was to finish and did state that the pre-order was for both the source code and the finished game.
Second, the ten-year timeline (I believe the live streams only spanned 8 years) should be taken with the note that this was live streaming for just one hour per day on weekdays, or for two hours two or three times a week later in the project. There's roughly 1000 hours of video content, not including the Q&As at the end of every video. The 1000 hours includes instructional content and whiteboard explanations in addition to the actual coding, which was done while explaining the code itself as it was written. (Also, he wrote literally everything from scratch, something which he stated multiple times probably doesn't make sense in a real project.)
Taking into account the non-coding content, and the slower rate of coding while explaining what is being written, I'd estimate somewhere between 2-4 months of actual (40hr/week) work was completed, which includes both a software and a hardware renderer. No idea how accurate that estimate is, but it's definitely far less than 10 years and doesn't seem very indicative that the coding style he was trying to teach is untenable for game projects. (To be clear, it might be untenable. I don't know. I just don't see how the results of the Handmade Hero project specifically are indicative either way.)
How much of that is due to the programming practices he espouses, I'm not sure. Ironically, if he went all-in on OOP with Smalltalk, I could see the super productivity that environment provides actually making it harder for him to finish anything, given how much it facilitates prototyping and wheel-reinvention. You see this with Pharo, where they rewrite the class browser (and other tools) every 2-3 years.
But his track record doesn't support the reputation he's built for himself.
> for game projects
That's the problem. Casey holds up a small problem domain, like AAA games, where OOP's overhead (even C++'s OOP) may genuinely pose a real performance problem, and suggests that it's representative of software as a whole; as if vtables are the reason Visual Studio takes minutes to load today vs. seconds 20 years ago.
The article you linked indicates the reason for him not finishing is specifically that he didn't like his game design, which seems orthogonal to coding practices.
He appears to have shipped middleware projects for RAD, and other contract work where he was not in charge of game design.
RAD was what, 15, 20 years ago? What has he released, in terms of proprietary or open source products, since then? Not just games, I mean ANYTHING. Refterm, and... what else? It's not like he was busy with his MSFT or RAD dayjob during this period.
He created Meow Hash somewhat recently and open sourced that. It's not a huge project but it's very useful. A lot of his time goes toward education, his personal projects and contract programming. Not every programmer is dedicated to releasing their own open source or commercial software. I'd bet most programmers don't. Using this as a metric to claim that he has a bad coding approach is ridiculous and laughable. Especially using Handmade Hero as an example... It really reveals your ignorance.
Also, since you care so much, let's see what you've released, smart guy. Preferably code so that we can see how talented you are.
> Also, since you care so much, let's see what you've released, smart guy. Preferably code so that we can see how talented you are.
I'm not the one telling everyone they're doing everything wrong, and did it not occur to you that my perception of what his output ought to have been over that timeframe (especially for someone who rates his own abilities as highly as he does) is informed by my own?
i think it's kinda funny, because Unity is very clearly inspired by some of casey's work
the big one is immediate mode UIs, which casey popularized back in 2005. Unity's editor uses it to this day, and if you do editor scripting, you'll be using it. for in-game UI, they switched to a component-based one, which also somewhat aligns with casey's opinions. and they shipped DOTS, which aligns even more with what he's saying
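for anyone who hasn't used one: in an immediate-mode UI there is no retained widget tree; the whole interface is re-declared every frame as plain function calls, and the call itself reports input. a rough sketch of the pattern (hypothetical api, just showing the shape; stubs stand in for the drawing and hit-testing a real library like Dear ImGui would do):

    #include <string>

    struct AppState { float volume = 0.5f; bool advanced = false; bool verbose = false; };

    // hypothetical immediate-mode api; stubs stand in for real drawing/hit-testing
    struct UI {
        bool button(const std::string&) { return false; }
        void slider(const std::string&, float*, float, float) {}
        void checkbox(const std::string&, bool*) {}
    };

    void save(const AppState&) {}

    // the ui is ordinary control flow, rebuilt from scratch each frame;
    // no widget objects to allocate, retain, or keep in sync with state
    void do_frame(UI& ui, AppState& state) {
        if (ui.button("Save"))   // true on the frame the button is clicked
            save(state);
        ui.slider("Volume", &state.volume, 0.0f, 1.0f);
        if (state.advanced)      // conditional ui is just an `if`
            ui.checkbox("Verbose logs", &state.verbose);
    }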
i think his lack of shipping is mostly because he switched to teaching and has absolutely no pressure to ship, rather than his approach being bad
I can see the argument for using a custom engine if you have specific design goals that can't be met by existing engines, but that seems like an edge case. I think 99% of game concepts can probably be done in Unity, Godot, or Unreal.
Meanwhile you could probably surpass Handmade Hero with any off the shelf engine with a tutorial and a few hours' work, or even a project template from an asset store. The biggest problem I have with Handmade Hero is that because Casey is putting so much effort into the coding and architecture up front, the game itself isn't interesting. It's supposed to be a "AAA game" but it's little more than a tech demo.
And that's why you use off the shelf engines - they allow you to put effort into designing the game rather than reinventing the wheel.
> 99% of game concepts can probably be done in Unity, Godot, or Unreal.
The vast majority of developers use these engines, so you would expect the vast majority of games to be stuff that's easy to make within those engines.
With how samey new games are, it's hard to argue that what we see comes close to the full design space of possible interesting games. That's partially developers copying games they've seen work and sell, but it's also developers making what is reasonably easy to make within Unity or Unreal with the resources they have.
I think it's hard to argue that there exists a vast space of untapped design potential for games that can't be realized purely because of the limitations of off-the-shelf game engines. Most people who use custom engines use them for mainstream, common game concepts, because they disagree with architectural decisions about the engine itself (most likely the language being used) and would rather start from scratch than work with the engine.
Handmade Hero can be made in Unity or Godot. So could Braid. I'm actually struggling to think of a game built in a custom engine that's so radically out of pocket design-wise that it needs a custom engine. I'm not arguing that the use case doesn't exist, I'm arguing that most of the time using a custom engine is a matter of convenience and comfort rather than creative expression, and isn't strictly necessary.
A custom engine is never _needed_ technically speaking since COTS engines are often customizable to the point where you can do anything you want with them. That doesn't mean that they don't influence the design of games, though.
There's a talk that Casey gives where he explains how he implemented the movement system for The Witness, in which he shows examples of Unity-based "walking simulator"-type games dealing with limitations of the engine in ways that The Witness was able to totally avoid. This allowed the game's artists to be creative with set design without worrying as much about performance issues or collision bugs, thus potentially opening up more design space.
Here's some praise of The Witness by Fabien Giesen:
> That’s where I am right now. I have never seen another game as cohesive as this. Not even close. Without getting into specifics or spoiler territory, I have never seen a game with such manifest intention behind every single detail, nor one where all the details cohere this well into a whole. This goes for the design and gameplay itself, but also across traditional hard boundaries. Game design and art play off each other, and both of these are tightly coupled with some of the engine tech in the game. It’s amazing. There is no single detail in the game that would be hard to accomplish by itself, but the whole is much more than the sum of its parts, with no signs of the endless compromise that are normally a reality in every project.
> A custom engine is never _needed_ technically speaking since COTS engines are often customizable to the point where you can do anything you want with them
Yeah, no. You can make a lot of things with an engine, and if you don't already have the necessary talent to make a custom engine you probably should just not.
But there are certainly games which would not exist as "Just use COTS" games. The first which comes to mind is Outer Wilds.
Outer Wilds is running a tiny solar-system model. Such a model isn't stable for very long even with a lot of compute power, which a video game console does not have; but in Outer Wilds there's an excuse for that: [spoiler] the sun is about to explode, it's a time-loop game. Still, this is a heavily specialised engine, because normal games centre on the camera or player. Outer Wilds can't do that; the model would explode almost immediately if you did, so the centre is the sun at all times.
I didn't know that. Although it seems as though they're sufficiently far outside what its makers think Unity is for that their attempt to move to Unity 5 had big obstacles. Thanks for correcting me!
[Also, another way I was wrong: to make Unity work, the centre of the world is the player, and so the universe is implemented in reverse; your orbit around the sun is calculated with you at the middle. This works correctly in our actual Einsteinian universe - no position is privileged, there is no "center" - but it would be crazy to do the maths; for the short time Outer Wilds needs it, it works well enough with their simple Newtonian physics model.]
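A toy version of that reference-frame trick (my own illustration, nothing to do with Mobius's actual code): rather than letting the player's coordinates grow without bound, keep the player pinned at the origin and move everything else by the inverse of the player's motion each step.

    #include <vector>

    struct Vec3 { float x, y, z; };

    // Player-centred frame: the player never drifts far from the origin
    // (where float precision is best); instead the rest of the universe
    // is shifted by the opposite of whatever the player just did.
    void recenter(std::vector<Vec3>& bodies, Vec3 player_delta) {
        for (auto& b : bodies) {
            b.x -= player_delta.x;
            b.y -= player_delta.y;
            b.z -= player_delta.z;
        }
    }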
This video contains many serious misrepresentations. For example, it claims that Alan Kay only started talking about message-passing in 2003 and that it was a kind of backpedaling due to the failures of the inheritance-based OOP model. That is a laughable claim. Kay had given detailed talks discussing issues of OOP, dynamic composition and message-passing in the mid-80s. Some of those talks are on YouTube:
The dates are the dates of the sources, he says in the talk he wasn't going to try to infer the dates these ideas were invented. Also he barely talked about Alan Kay.
From the video: "It's like, yeah, he said that in 2003, right? He said that after a very long time. So why did he say it? It's because 10 years earlier, he was already saying he kind of soured on it."
Casey says he “didn’t really cover Alan Kay” https://youtu.be/wo84LFzx5nI?t=8651 To me that says that Kay wasn’t a major focus of his research. That seems to be reflected in the talk itself: I counted 6 Bjarne sources, 4 Alan Kay sources, 2 more related to Smalltalk, and about 10 focused on Sketchpad, Douglas Ross, and others. By source count, the talk is roughly 18% about Alan Kay and 27% about Smalltalk overall - not a huge part.
As far as the narrative, probably the clearest expression of Casey's thesis is at https://youtu.be/wo84LFzx5nI?t=6187 "Alan Kay had a degree in molecular biology. ... [he was] thinking of little tiny cells that communicate back and forth but which do not reach across into each other's domain to do different things. And so [he was certain that] that was the future of how we will engineer things. They're going to be like microorganisms where they're little things that we instance, and they'll just talk to each other. So everything will be built that way from the ground up." AFAICT the gist of this is true: Kay was indeed inspired by biological cells, and that is why he emphasized message-passing so heavily. His undergraduate degree was in math + bio, not just bio, but close enough.
As far as specific discussion, Casey says, regarding a quote on inheritance: https://youtu.be/wo84LFzx5nI?t=843 "that's a little bit weird. I don't know. Maybe Alan Kay... will come to tell us what he actually was trying to say there exactly." So yeah, Casey has already admitted he has no understanding of Alan Kay's writings. I don't know what else you want.
I nearly jumped out of my proverbial seat with joy when Casey talked about it being about where you draw your encapsulation boundaries. YES! THIS IS THE THING PEOPLE ARGUING ABOUT OOP NEVER SEEM TO ADDRESS DIRECTLY!
Honestly would love to see a Kay and Casey discussion about this very thing.
I find the discussions about real domain vs OOP objects to be a bit tangential, though still worth having. When constructing a program from objects, there’s a ton of objects that you create that have no real-world or domain analogs. After all, you’re writing a program by building little machines that do things. Your domain model likely doesn’t contain an EventBus or JsonDeserializer; that purely exists in the abstract ‘world’ of your software.
Here’s a thought: Conceptually, what would stop me from writing an ECS in Smalltalk? I can’t think of anything off the top of my head (whether I’d want to or not is a different question). Casey even hints at this.
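As far as I can tell, nothing: stripped of the cache-layout concerns, the core of an ECS is tiny. Entities are bare IDs, components are per-type tables keyed by ID, and systems are procedures over every entity that has the right component set. A minimal sketch (in C++ for familiarity; the shape would port to Smalltalk classes directly):

    #include <cstdint>
    #include <unordered_map>

    using Entity = std::uint32_t;  // an entity is just an ID

    struct Position { float x, y; };
    struct Velocity { float dx, dy; };

    // Components are per-type tables keyed by entity ID; an entity "has"
    // a component simply by having a row in the corresponding table.
    struct World {
        std::unordered_map<Entity, Position> positions;
        std::unordered_map<Entity, Velocity> velocities;
    };

    // A system is a plain procedure over every entity with the required
    // component set; there is no inheritance hierarchy anywhere.
    void movement_system(World& w, float dt) {
        for (auto& [id, v] : w.velocities)
            if (auto it = w.positions.find(id); it != w.positions.end()) {
                it->second.x += v.dx * dt;
                it->second.y += v.dy * dt;
            }
    }

(Real ECS implementations use dense arrays rather than hash maps for cache behavior; the maps just keep the sketch short.)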
This is probably the best Casey talk I’ve ever seen and one of the clearest definitions of ‘here is my problem with OOP’. I don’t agree with everything necessarily, but it’s the first time I’ve watched one of these and thought “yep they actually said the concrete thing that they disagree with”.
i'm very curious how this conference came together. how does a conference in a small town in Sweden, run by relative unknowns, gather big names like this? is it tied to some elite community that only insiders know about?
when you open the transcript view and start selecting from the bottom of the box (where it says "English (auto-generated)") upward to the start of the text, you end up with the whole transcript selected.
I am a writer, and I write words. I do this by reading lots and lots of words. Some of my articles have required me to read 20,000 to 30,000 words or more in a few hours to gather the info in order to tell people what happened or what it meant.
I can read at a few thousand words a minute if I must. That is apparently, as I learned in recent years, on the order of 10x the average reading speed.
(As an aside, a blind friend of mine has his computer set to talk at 600wpm but he could handle faster before his hearing started to fail. This is not some superpower; this speed of comprehension is just a learned skill.)
I can't waste two or three hours watching videos from someone who is, to me, some internet rando just to find out if there is a story here. I'd lose my job.
I need text, plain text that I can zoom and search and stick through Readability or something. Some gen-alpha types litter their text posts with little furry characters who -- I don't know, relay some of the author's inner monologue or something? I don't really understand. It destroys my ability to read it, and I've reached a point of intolerance. There are a million blog posts and comments a day, and if someone deliberately fscks up theirs because of mental health problems, fine, their problem, I'm not wasting my time fighting through it.
If it's worth saying, if it's worth sharing, it's worth writing down.
If it's not worth writing down, it's not worth my time.
To quote Bill Hicks, “I don't mean to sound bitter, cold, or cruel, but I am, so that's how it comes out.”
A "compile-time hierarchy of encapsulation that matches the domain model"? Don't we all call that Typestate these days? Change my mind: Typestate - and Generic Typestate even more so - is just good old OOP wearing a trenchcoat.