> There's four excellent examples of things that were better 10 years ago before this practice was instituted.
FWIW, I interviewed with Google in 2009 and the interview was 100% pure leetcode (not even design/architecture questions). It was a series of four back-to-back 1-on-1 meetings, each consisting of "Hello" followed by an algorithmic puzzle.
If anything they seem to have broadened the scope of interviews since then.
Alright, I stand corrected. I also went to job interviews in San Francisco around 2005 or so and found the same thing. I was turned off by them back then as well.
I'm not a researcher in the field, but I'd love to see a study on this: say, one measuring deadline slippage, budget overrun, and retention rate.
The test groups would be teams that use these interview tactics almost exclusively versus teams that use a variety of other methods.
If my assumption that this is a mostly arbitrary attribute is correct, their averages would be nearly the same.
If the former produced statistically better results, then I'd literally quit my job and go work for one. I'd be happy to be wrong about this, but I don't think I am.