Hacker News

> Why is the software engineering interview wildly different from the traditional engineering interview

I have my personal theory.

1) Top companies receive far more applications than they have open positions, so they standardised on highly technical interviews as a way to eliminate false positives. I think these companies know this method produces plenty of false negatives, but the margin between the two (screening out candidates who wouldn't make it versus missing great ones) is wide enough that it's acceptable. It does lead to some absurd results, though, such as the author of a library being judged unqualified to maintain that very library.

2) Most of these top companies grew at such rates, hiring so aggressively from top colleges, that eventually the interview was built by near-fresh grads for other fresh grads.

3) Many companies thought that replicating the brilliant results of these unicorns meant copying them. So you get OKR nonsense and interviews like these.



Yup. And 3) is particularly interesting. Lots of companies actually need to hire people who can get things done and build user-friendly software, yet they thought they needed people who could turn any O(N^2) algorithm into O(N) or O(N log N).

And even at Google, leetcode has become noise because people simply cram it. When Microsoft started using leetcode-style interviews, there were no interview-prep sites, and later there was, at most, Cracking the Coding Interview. So people who aced the interview were either naturally talented or so geeky that they devoured math and puzzle books. Unfortunately, we have lost that signal nowadays.


> yet they thought they needed to hire people who could turn any O(N^2) algorithm into O(N) or O(N log N)

And the great irony is that most software is slow as shit and resource intensive. Knowing worst-case performance is good, but what about the mean? Or what you expect users to actually be doing? Those can completely change the desired algorithm.
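To make that concrete, here's a hypothetical Python sketch (my example, not from the thread) of the classic interview transformation: a nested-loop duplicate check versus a hash-set pass.

```python
def has_duplicate_quadratic(items):
    # Worst case O(N^2): compare every pair.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    # Expected O(N): one pass with a hash set.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

For the tiny inputs many real workloads actually produce, the "worse" quadratic version can even win (no hashing, no allocation), which is exactly the point about mean versus worst case.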

But there's the long-running joke: "10 years of hardware advancements have been completely undone by 10 years of advancements in software."

Because people now rely on the hardware to do things rather than trying to make software more optimal. It amazes me that even gaming companies do this! The root of the issue is trying to push things out quickly, so a lot of software is really just a Lovecraftian monster made of spaghetti and duct tape. And for what?

Apple released the M4 today, and who's going to use that power? Why did it take years for Apple to develop a fucking PDF reader that I can edit documents in? Why is it still a pain to open a PDF on my macbook and edit it on my iPad? It constantly fails and is unreliable, disconnecting despite the devices being <2ft from one another. Why can't I use an iPad Pro as my glorified SSH machine? Fuck man, that's why I have a laptop: so I can log in to another machine and code there. The other things I need are latex, word, and a browser.

I know I'm ranting a bit, but I just feel like we in computer science have lost the hacker mentality that made the field so great in the first place (and that brought about so many innovations). It feels like there's too much momentum now and no one is __allowed__ to innovate.

To bring it back to interviewing signals, I do think the rant relates: this same degradation makes it harder to tell candidates apart when there's so much pressure to perform like a textbook. But I guess this is why so many ML enthusiasts compare LLMs to humans: we want humans to be machines.


Many software programs fail to achieve ultimate efficiency either because the software engineers are unable to do so, or because external factors prevent them from achieving it. I believe that in most cases, it is the latter.


I'd like to think the latter, because it makes us look better, but I've seen how a lot of people code... I mean, GPT doesn't produce shit code just because it can't reason; it'll only ever be as good as the data it was trained on. I teach, and boy can I tell you that people do not sit down and take the time to learn. I guess this is inevitable when there's so much money involved. But it makes the interview easy, since passion is clear. I can take someone passionate and make them better than me, but I can't make someone in it for the money even okay. You're hiring someone long term, so I'd rather have someone who's always going to grow than someone who will stay static, even if the former is initially worse.

IME the most underrated optimization tool is the delete command. People don't realize it's something you should do frequently: delete a function, a file, or even a whole code base. Some things just need to be rewritten. Hell, most things I write get written several times. You do it for an essay or any other writing, so why is code different?

Yeah, we have "move fast and break things" but we also have "clean up, everybody do their share." If your manager is pushing you around, ignore them. Manage your manager. You clean your room don't you? If most people's code was a house it'd be infested with termites and mold. It's not healthy. It wants to die. Stop trying to resuscitate it and let it die. Give birth to something new and more beautiful.

In part I think managers are to blame because they don't have a good understanding, but engineers are also to blame for enabling the behavior and not managing their managers (you need each other, but they need you more).

I'll even note that we jump into huge code bases all the time, especially when starting out. Rewriting is a great way to learn that code! (Be careful pushing upstream, though, and make sure you communicate!) Even if you never push, it's often faster in the long run. Sure, you can duct-tape shit together, but patchwork is patchwork, not a long-term solution (or even a moderate one).

And dear God, open source developers, take your issues seriously. I know there are a lot of dumb ones, but a lot of people are trying to help and wanting to contribute. An issue isn't a mark of failure; it's a mark of success, because people are using your work. If they're having a hard time understanding the documentation, that's okay; your docs can be improved. If they want to do something your program can't, that's okay, and you can admit that and even ask for help (don't fucking tell them it does and move on. No one's code is perfect, and your ego is getting in the way of your ego. You think you're so smart that you're preventing yourself from proving how smart you are, or from getting smarter!). Close stale, likely-resolved issues (with a message like "reopen if you still have issues"), but dear god, don't just respond and close an issue right away. Your users aren't door-to-door salesmen or Jehovah's Witnesses. A little kindness goes a long way.


> And the great irony is that most software is slow as shit and resource intensive

You really need those 100x faster algorithms when everything is a web or Electron app.


I’d add another factor to #1: it feels objective and unbiased. That’s at least partially true compared with other approaches like the nebulous “culture fit”, but the impression is also partly a blind spot: the people working there are almost certainly the type who do well with that style, and it can be hard to recognize that other people are uncomfortable with something you find natural.


I would say that it makes the interview process more consistent and documented, and less subject to individual bias. However there's definitely going to be some bias at the institutional level considering that some people are just not good at certain types of interview questions. Algorithm and data structures questions favor people who recently graduated or are good at studying. Behavioral interviews favor people who are good at telling stories. Etc.


Yes, to be clear I’m not saying it’s terrible - only that it’s not as objective as people who like it tend to think. In addition to the bias you mentioned, the big skew is that it selects for people who do well on that kind of question in an unusual environment under stress, which is rarely representative of the actual job. That’s survivable for Google – although their lost decade suggests they shouldn’t be happy with it – but it can be really bad for smaller companies without their inertia.


Yeah I buy this theory.

The problem I have with it is that for this to be a reasonably effective strategy, you should change the arbitrary metric every few years; otherwise it is likely to be hacked and has the potential to turn into a negative signal rather than a positive one. Essentially, your false positives can come to dominate by "studying to the test" rather than studying.

I'd say the same is true for college admissions... because let's be honest, I highly doubt a randomly selected applicant would be significantly more or less successful than one chosen by the current process. I'd imagine the simple act of applying is a strong enough natural filter to make this hypothesis much stronger (in practice, but see my prior argument).

People (and machines) are just fucking good at metric hacking. We're all familiar with Goodhart's Law, right?


I think (but cannot prove) that along the way, it was decided to explicitly measure ability to 'study to the test'. My theory goes that certain trendsetting companies decided that ability to 'grind at arbitrary technical thing' measures on-job adaptability. And then many other companies followed suit as a cargo cult thing.

If it were otherwise, and those trendsetting companies actually believed LeetCode tested programming ability, then why isn't LeetCode used in ongoing employee evaluation? Surely programming ability a) varies over an employee's tenure at a firm and b) is a strong predictor of near-term employee impact. So I surmise that such companies don't believe this, and that LeetCode therefore serves some other purpose, in some semi-deliberate way.


I do code interviews because most candidates cannot declare a class or variable in a programming language of their choice.

I give a very basic business problem with no connection to any useful algorithm, and explicitly state that there are no gotchas: we know all the inputs, and here’s what they are.

Almost everyone fails this interview, because somehow there are a lot of smooth tech talkers who couldn’t program to save their lives.


I think I have a much lazier explanation. Leetcode-style questions were a good way to test expertise in the past, but by the time everyone follows suit, the test becomes ineffective. What's the saying? When everyone is talking about a stock, it's time to sell. Same thing.


> If it were otherwise, and those trendsetting companies actually believed LeetCode tested programming ability, then why isn't LeetCode used in ongoing employee evaluation?

Probably recent job performance is a stronger predictor of near future job performance.


so having done interviews, just because the latter may be more present, does not mean the hordes of people just throwing a spaghetti made-up-resume at the wall have gone away. our industry has a great strength in that you don't need official credentialing to show that you can do something. at the same time, it is hard to verify what people are saying in their resumes, they might be lying in the worst case but sometimes they legitimately think they are at the level they are interviewing for. it was bad before the interest rate hikes, i cannot imagine what the situation is like now that hiring has significantly slowed and a lot more people are fighting for fewer jobs.

i did interviews for senior engineer and had people fail to find the second biggest number in a list, in a programming language of their own choosing. it had a depressingly high failure rate.
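For reference, one plausible single-pass answer to that question (a sketch in Python; it treats "second biggest" as the second-largest distinct value and assumes the list has at least two distinct numbers):

```python
def second_largest(nums):
    # Track the two largest distinct values seen so far in one pass.
    largest = second = float("-inf")
    for n in nums:
        if n > largest:
            second = largest
            largest = n
        elif largest > n > second:
            second = n
    return second
```

A candidate who can't produce something like this, in any language, after choosing the language themselves, is the failure mode being described.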


I had a candidate claiming over ten years of experience who couldn’t sum an array of ints in any language of his choosing.

This wasn’t an off-by-one or an unhandled overflow; he couldn’t get started at all.


Ten years of experience at one of those places where every keystroke outside powerpoint is offshored. Why would they know how to sum ints? Some people do start their careers as what could best be described as software architecture assistants. They never touched a brick in their lives, to go with the architecture image.


I have junior and senior students who struggle with fizzbuzz... But damn, are they not even allowed to do a lazy, inefficient `sort(mylist)[-2]` if they forgot about for loops? That's the most efficient in terms of number of characters, right? haha
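For what it's worth, the lazy version does run (Python assumed, using the builtin `sorted` rather than `sort`), with one subtlety around duplicates that would be worth teasing out in an interview:

```python
def second_biggest_lazy(mylist):
    # O(N log N) instead of a single O(N) pass, but one line of logic.
    return sorted(mylist)[-2]

# Subtlety: with duplicates this returns the second *element*,
# not the second *distinct* value: sorted([5, 5, 3])[-2] is 5.
```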

But I still think you can reasonably weed these people out without whiteboard problems, for exactly the same reasons engineers and scientists can. And let's be honest: for the most part, your resume should really be your GitHub. I learn so much more about a person by walking through their GitHub than from their resume.


Using GitHub is discriminatory against people who don’t code on the weekends outside of their jobs, and most people’s job related code would be under NDA and not postable on Github.

To be a capital E Engineer you have to pass a licensing exam. This filter obviously is not going to catch everything but it does raise the bar a little bit.

—-

As far as the root question goes, they are allowed to propose that, and then I can try to tease out of them why they think it is the best approach and whether something is better. But you would be surprised at the creative ways people manage to not iterate through a full loop even once.


You're right, and a lot of people who are good coders do code for fun. But you're also right that not all of them push their code to public repositories. The same is true for mechanical engineers: they're almost always makers, fixing stuff at home or doing projects for fun. Not always, but there's a strong correlation.

But you can still get people to explain projects they did and challenges they faced. We do it all the time with people who have worked on classified stuff. If you're an expert, it's hard for people to bullshit you in your own domain. Leetcode is no different: it doesn't test whether you really know the stuff, it tests how well you can memorize and do marginally beneficial work to make your boss happy. Maybe that's what you want. But it won't get you the best engineers.


Leetcode, in the interviews that I do, is not the only thing I use.

But when I am asked to do a one-hour slot in four interview loops a week, plus all the prep and debriefs, on top of all my normal day-to-day deliverables, we need some fast filters for the obvious bullshitters. Interview volume is very high and there is a lot of noise.


Hiring people who code at all is discrimination against people who played video games instead of learning to code.


“Codes on their spare time” is not part of the job description, but “codes at all” is.

There are plenty of reasons not to code in one's spare time. If anything, the people most likely to do that are often also the ones coding interviews are supposed to be privileging: fresh, single college grads.

I don’t know how people square the statement “take-home assignments are unpaid labor and unfair to people with time commitments” with doing a 180 and saying “people should have a fresh, up-to-date github that they work on in their spare time.”


If it would take the candidate "spending every waking moment of their lives coding" to have one or two small coding projects after a half decade plus in the field, that's a signal.

If you went to college but never made anything, that's a signal.

If you didn't go to college, and never made anything, just shut up and make something.


In a half decade plus some people pick up other commitments that are not side projects, like a pet, a child, sports, hiking, etc.

At the end of the day, what a candidate does in their spare time off the job isn’t really relevant to the employer once they’re hired, so it’s not like I should privilege spare-time projects over people who don’t do them, particularly if people don’t want to code off the clock. There are plenty of good engineers who do not code off the clock, and plenty of bad engineers who do.

Also, more often than not, coding off the clock for the sake of having a portfolio is not really indicative of anything. There are no review processes, for example, in a one-person side project, and unless I spend significantly more hours on background checking, who’s to say the side project isn’t plagiarized? People already lie on their resumes today.


In the time you took writing this comment you could've gotten the repo created and the title and description filled out. Writing text in a public readme.md would serve you better than sending it to me.


I have side projects, but I don't expect every candidate to, nor do I expect every candidate to be a religious reader of every HN comment thread.


I'm not saying it should be mandatory, but they would have to show mastery some other way. Whiteboard? Live coding? Project?

I think a side project opens up the opportunity to skip that for a project presentation. This is a lot more in line with real life as you would typically code and later present that work to others. You would defend it to some degree, why you made choice A vs choice B. If you created it, you'll be able to do that.

Doesn't need to be a huge thing. Just show you can do anything at all really at the junior level. Intermediates can show mastery in a framework with something with slightly more complexity than a "hello world".


> I'm not saying it should be mandatory, but they would have to show mastery some other way. Whiteboard? Live coding? Project?

godelski said your resume should really be GitHub. You could have said this instead of sarcasm.


Typically the people without side projects also make excuses to not do those either.

If I had a company I'd offer looking over an existing project or a project where you create a side project of your choice without any further direction.

So not mandatory but the easiest way to go probably. Once you apply to my company you'll have one to show for next time at least.

(If you want to write the project out on the whiteboard instead I guess feel free, that seems hard though.)


Many people do not have side projects. Few people working as software engineers were not tested in some way.

I think it's more useful and more fair to give candidates some direction when I request something. What scope is enough? What tests are enough? We define side project differently or you would expect much more time from candidates than I do.


> What scope is enough? What tests are enough?

How each candidate answers each question themselves tells you about them.


I used to think so. But real tasks have acceptance criteria. Seeing how candidates work with loose criteria has told me more than telling them in effect to read my mind.


It's so clear that people on this site have no friends, family, or hobbies. I also don't think you realize how ableist your comment is.

Some people have more trouble completing tasks for reasons that have nothing to do with talent or intelligence.


This means no. 1) is the right answer.



