In practice things are much more nuanced than what you learn in school and textbooks.
Sometimes you have a convenient library that sort of does what you want, but it does a lot more. Do you just use it, or do you re-implement only the subset that you need which can be optimized to run much faster?
That's not an algorithm question but more of a software engineering tradeoff between impact (how badly your users/business need it faster), resources and priority (how much time you can spend on optimization), and whether you want to maintain that code (as opposed to calling the library, which makes it somebody else's problem). Sometimes the correct thing to do really is to call the slower library instead of writing your own highly optimized routines.
In this case of terminal emulation, apparently the folks at Microsoft weren't aware of the faster solution, which you could say is an algorithm issue (but that's stretching things a bit -- you surely wouldn't see terminal emulation in an algorithms textbook, and the fact that one has memorized the textbook algorithms for an interview doesn't automatically mean they would figure out a better way of emulating a terminal. Presumably Microsoft does some whiteboarding on algorithms as well, but that didn't prevent this fiasco from happening). Anyway, the concerns I mentioned above are probably still relevant here (the directdraw run thing was a convenient but slow library that apparently did much more than they needed).
Algorithm "skills" are probably overrated in the sense that people can memorize textbook algorithms and their analyses all they want, but real world problems are often more complicated than that, and those "skills" don't necessarily translate/apply. For one, there's no general way to prove lower bounds, so a less imaginative programmer might just assume their inefficient implementation is all that is possible, until somebody else points out a better method. People are getting technical interviews wrong if (as interviewers) they ask standardized algorithm questions -- the candidates expect them and prepare for them, memorizing them if need be. But as the interviewer, ideally they'd want to be able to find the candidate who can discover a good solution for a novel problem they never saw or thought about before.
I'd further claim that while some of these skills can be learned and improved through training, there's a different "ceiling" for everyone since a lot of the intuition and imaginative aspects of problem solving can't really be taught or acquired. I did a couple of years of competitive programming when I was younger, and I clearly observed that, when pressed with hard problems, there's a real difference in how well people respond. The model in the original article assumes that these kinds of skills come with experience, but in my experience that's mostly not true if you're dealing with "hard-ish" problems like how to vastly optimize text layout in terminals.
> Algorithm "skills" are probably overrated in the sense that people can memorize textbook algorithms and their analyses all they want ...
Memorizing algorithms has very little to do with algorithm skills. You can memorize any algorithm textbook you like, then go take part in an algorithm competition and see how far that will get you (spoiler alert: not far). If you are good at algorithms, you have the ability to construct efficient solutions to problems you have never seen before. And by "construct" I don't mean pattern-matching to previously-memorized algorithms, I mean writing something that didn't exist before you wrote it.
> People are getting technical interviews wrong if (as interviewers) they ask standardized algorithm questions ... But as the interviewer, ideally they'd want to be able to find the candidate who can discover a good solution for a novel problem they never saw or thought about before.
I agree with you 100% here. Good algorithm questions are questions that can't be solved by memorized answers.
> I'd further claim that while some of these skills can be learned and improved through training, there's a different "ceiling" for everyone since a lot of the intuition and imaginative aspects of problem solving can't really be taught or acquired [...] The model in the original article assumes that these kinds of skills come with experience, but in my experience that's mostly not true if you're dealing with "hard-ish" problems like how to vastly optimize text layout in terminals.
Again I agree with you 100%. This is also why I went in a different direction with my comment compared to the expectations set in the article. As you noted, the writer of the article expected everyone to learn these skills. In contrast, I said that it's ok for different people to have different skillsets.
> In practice things are much more nuanced than what you learn in school and textbooks. Sometimes [...]
This argument is presented way more often than it actually holds water. Yes, sometimes things are nuanced and complicated and whatnot, but we don't have to look at hypotheticals; we can just look at the actual case we have right here. In this case there was no nuance or complication or whatnot that would justify a 40x performance drop when rendering colored text. In this case things really were simple: turns out this was just a horribly inefficient solution (like many products written by Microsoft are). Furthermore, it turns out Microsoft devs truly were incapable of recognizing that a few simple optimizations would deliver orders of magnitude better performance without breaking anything, and without introducing unnecessary complexity. We know this because Microsoft devs called the prospect of optimizing this performance a "doctoral research project", and then some guy did it in his free time over a few weekends.
> Memorizing algorithms has very little to do with algorithm skills. You can memorize any algorithm textbook you like, then go take part in an algorithm competition and see how far that will get you (spoiler alert: not far). If you are good at algorithms, you have the ability to construct efficient solutions to problems you have never seen before. And by "construct" I don't mean pattern-matching to previously-memorized algorithms, I mean writing something that didn't exist before you wrote it.
Yes, that's what I was trying to say. When people talk about "algorithm skills", it's not clear whether they mean only learning the stuff from the textbook, or the ability to improvise on top of that. Sometimes I suspect people don't know the difference themselves. For example, in the context of technical interviews, if the interviewer chooses a typical question like implementing binary search or reversing a linked list, they are most likely going to filter for those who memorized (to some extent) the solutions instead of those with the ability to create novel solutions.
So about "algorithm skills are useless and shouldn't be used when interviewing", I guess my point is, it's useful, but it really depends on the interviewer not to screw it up by picking a standard question that can easily prepared for beforehand.
> In this case there was no nuance or complication or whatnot that would justify a 40x performance drop when rendering colored text. In this case things really were simple: turns out this was just a horribly inefficient solution (like many products written by Microsoft are).
My original point was that the reason for the performance drop likely wasn't related to algorithms. In practice, algorithms rarely are the issue. To clarify, in practice, what you use 90% of the time are plain arrays and indices / pointers. Occasionally you'll use a hashmap to look up (distributed) or cache something.
Performance drops in the real world are most often explained by a lack of understanding of what certain library calls do, what their performance characteristics are, and/or what needs to be done to satisfy the requirements, or what the requirements should even be.
In most real-world programming domains (probably including the issue discussed here), most problems look somewhat like the following example: someone reads byte-by-byte from some input stream by calling the read() system call, instead of using a buffered variant (like stdio's fgetc()). They might not even notice that they are doing this, because the calls are hidden below a library abstraction. Now, for each byte read, there are ballpark 1000 cycles wasted due to system call overhead and other related performance-decreasing effects. So there is an overhead on the order of maybe 1000x. This leads to terrible performance, but it's not related to "algorithms" (asymptotic complexity is not the issue here; we're just pumping data).
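To make that pattern concrete, here's a minimal, hypothetical sketch in C (my own illustration, not the actual terminal code): counting newlines on stdin once with one raw read() per byte and once through stdio's buffering. Both are O(n) in the number of bytes; only the constant factor differs.

```c
/* Hypothetical sketch of the byte-by-byte read() anti-pattern described above. */
#include <stdio.h>
#include <unistd.h>

/* Slow variant: one system call per byte, so thousands of wasted cycles each time. */
static long count_newlines_syscall(int fd) {
    long count = 0;
    char c;
    while (read(fd, &c, 1) == 1) {  /* traps into the kernel for every single byte */
        if (c == '\n')
            count++;
    }
    return count;
}

/* Fast variant: stdio pulls in a large block once and hands out bytes from a
 * userspace buffer, amortizing the system call overhead away. */
static long count_newlines_buffered(FILE *f) {
    long count = 0;
    int c;
    while ((c = fgetc(f)) != EOF) {
        if (c == '\n')
            count++;
    }
    return count;
}

int main(int argc, char **argv) {
    (void)argv;
    /* Pass any argument to use the slow per-byte read() path instead. */
    long n = (argc > 1) ? count_newlines_syscall(STDIN_FILENO)
                        : count_newlines_buffered(stdin);
    printf("%ld\n", n);
    return 0;
}
```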
> My original point was that the reason for the performance drop likely wasn't related to algorithms. In practice, algorithms rarely are the issue. To clarify, in practice, what you use 90% of the time are plain arrays and indices / pointers. Occasionally you'll use a hashmap to look up (distributed) or cache something.
This doesn't make any sense. It sounds like you're implying that algorithms are not... using plain arrays and pointers? But that's exactly how algorithms are usually constructed. Even the hashmaps you mentioned are typically implemented using arrays and pointers. If you use arrays and pointers to write something that, let's say, has O(n^3) time complexity when the same operation could be written with O(n) time complexity, then you have an algorithm problem.
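As a smaller-scale toy example of the same idea (O(n^2) where O(n) suffices, rather than O(n^3); my own illustration, nothing from the terminal code): both functions below use nothing but plain arrays and indices, yet one of them is an algorithm problem.

```c
/* Task: fill out[i] with the sum of in[0..i]. Plain arrays and indices in both cases. */
#include <stddef.h>

/* Quadratic: recompute every prefix from scratch -> O(n^2) additions. */
void prefix_sums_slow(const int *in, long *out, size_t n) {
    for (size_t i = 0; i < n; i++) {
        long sum = 0;
        for (size_t j = 0; j <= i; j++)
            sum += in[j];
        out[i] = sum;
    }
}

/* Linear: carry the running sum forward -> O(n) additions. */
void prefix_sums_fast(const int *in, long *out, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        sum += in[i];
        out[i] = sum;
    }
}
```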
> Performance drops in the real world are most often explained by a lack of understanding of what certain library calls do, what their performance characteristics are, and/or what needs to be done to satisfy the requirements, or what the requirements should even be.
This is such a weak excuse. If you don't understand why a library call is crushing your performance, then stop using that library. People typically use libraries to do the tiniest of tasks, tasks that could often be solved with custom code written in an hour. As a result of this attitude we have terminals that have performance issues... for rendering colored text. That's insane! Sometimes it feels like this whole field is just people gluing together pieces made by other people, with no understanding whatsoever about how everything works. There's no excusing this behavior.
> It sounds like you're implying that algorithms are not... using plain arrays and pointers?
Ok, but no, I did not mean to imply that. What I mean is that most programs either do straightforward linear scans over whole arrays, or work on data that was specifically requested by some index provided by a separate entity. Using a hashmap to look up items faster is almost at the ceiling of complexity of most real-world programs.
And really, that's sufficient to do most things at max speed. "Algorithms" (meaning stuff like interval trees, Fenwick trees, LCA computation, square root decomposition...) is an irrelevant academic exercise for the vast majority of applications (including kernels and games).
Even at the stage of using the most optimal algorithm (which is probably so simple that it doesn't deserve that name), we're still frequently dealing with 10x to 1000x factors of inefficiency -- code that is optimal from an algorithmic / asymptotic standpoint but just badly implemented.
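A hypothetical example of that kind of gap (again my own, not from the terminal discussion): both loops below are asymptotically optimal O(n^2) sums over an n x n matrix, but one of them walks memory against the storage layout and can be many times slower on large inputs.

```c
/* Both functions sum an n x n matrix stored row-major in one flat array.
 * Both are O(n^2); the difference is purely in how they touch memory. */
#include <stddef.h>

/* Cache-friendly: walks the array in storage order, one contiguous stride. */
long sum_row_major(const int *m, size_t n) {
    long sum = 0;
    for (size_t row = 0; row < n; row++)
        for (size_t col = 0; col < n; col++)
            sum += m[row * n + col];
    return sum;
}

/* Cache-hostile: jumps n elements between consecutive accesses, so for large n
 * almost every access misses the cache. Same complexity, much slower in practice. */
long sum_col_major(const int *m, size_t n) {
    long sum = 0;
    for (size_t col = 0; col < n; col++)
        for (size_t row = 0; row < n; row++)
            sum += m[row * n + col];
    return sum;
}
```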
> Sometimes it feels like this whole field is just people gluing together pieces made by other people, with no understanding whatsoever about how everything works.
That's exactly how it is. To people who want to do better, I recommend checking out programmers like Casey Muratori (the original poster of the GitHub issue), and the Handmade Network community, for example.
> There's no excusing this behavior.
They didn't know better. Not particularly flattering though.
> Ok, but no, I did not mean to imply that. What I mean is that most programs either do straightforward linear scans over whole arrays, or work on data that was specifically requested by some index provided by a separate entity. Using a hashmap to look up items faster is almost at the ceiling of complexity of most real-world programs.
Ok, now I see what you mean. I mean, I disagree with you, but I understand your point now.
> And really, that's sufficient to do most things at max speed. "Algorithms" (meaning stuff like interval trees, Fenwick trees, LCA computation, square root decomposition...) is an irrelevant academic exercise for the vast majority of applications (including kernels and games).
Yes, all of the examples you provided are irrelevant for your average project. I don't mean that your average project will run into a need to optimize range queries with Fenwick trees. I mean your average project will run into situations where someone needs to write a simple O(n) list traversal, and they somehow manage to write it as O(n^2) because they have no understanding of or interest in algorithms. If the organization has at least some people on staff who care and have algorithm skills, they will spot these low-hanging-fruit inefficiencies and fix them.
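As a made-up illustration of that failure mode: appending to a singly linked list by re-walking it from the head every time turns a simple O(n) build into O(n^2); the fix is just to remember the tail.

```c
/* Made-up illustration of an "accidentally quadratic" traversal:
 * building a singly linked list of n items (error handling omitted). */
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

/* O(n^2): every append re-walks the whole list from the head to find the end. */
struct node *build_slow(const int *values, size_t n) {
    struct node *head = NULL;
    for (size_t i = 0; i < n; i++) {
        struct node *fresh = malloc(sizeof *fresh);
        fresh->value = values[i];
        fresh->next = NULL;
        if (!head) {
            head = fresh;
        } else {
            struct node *cur = head;
            while (cur->next)          /* walks i nodes on the i-th insert */
                cur = cur->next;
            cur->next = fresh;
        }
    }
    return head;
}

/* O(n): keep a pointer to the tail so each append is constant time. */
struct node *build_fast(const int *values, size_t n) {
    struct node *head = NULL, *tail = NULL;
    for (size_t i = 0; i < n; i++) {
        struct node *fresh = malloc(sizeof *fresh);
        fresh->value = values[i];
        fresh->next = NULL;
        if (tail)
            tail->next = fresh;
        else
            head = fresh;
        tail = fresh;
    }
    return head;
}
```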