Happy to see this. The push to RTO seems to be a front for legal constructive dismissal of expensive employees. These jobs are immediately being backfilled by remote employees on the other side of the world. The FTE-to-third-party ratio is growing rapidly; it's becoming much more common to see things like a 1:5+ FTE-to-3P ratio.
There are two established companies I know of: the Chinese company NetDragon Websoft (5k employees) has an AI CEO named Tang Yu, and the Polish rum company Dictador did something similar. We'll see where it goes, I guess.
Can anyone on the hiring side chime in on what you're seeing from applicants? Every job posting I see has 100+ candidates; I assume most are either international (due to the remote option) or part of this spray-and-pray phenomenon. Is it obvious they're unqualified, or is it actually difficult to separate the signal from the noise?
Are these models high-risk because of their lack of interpretability? Specialized models like temporal fusion transformers attempt to solve this, but in practice I'm seeing folks torn apart when defending transformers in front of model risk committees at organizations mature enough to have them.
Interpretability is just one pillar to satisfy in AI governance. You have to build submodels to assist with interpreting black-box main prediction models.
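A minimal sketch of that submodel idea, assuming scikit-learn and synthetic data (the dataset and model choices here are illustrative placeholders, not from the comment above): fit an interpretable surrogate, such as a shallow decision tree, to the black-box model's predictions, then read the tree as an approximate explanation.

    # Hypothetical surrogate-model sketch; data and models are placeholders.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

    black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Train the interpretable submodel on the black box's *outputs*, not the labels.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Fidelity: how often the surrogate agrees with the black box.
    print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())
    print(export_text(surrogate))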
Azure OpenAI still flags prompts that trigger its censorship rules for human review. The only way around that is to request a waiver, which is generally reserved for regulated industries. That may have changed (hope so!), but at the time I know it was a non-starter for some privacy-minded clients.
What would be a good design pattern for putting these dashboards behind auth? I suppose since they're static files you could just serve them with something like FastAPI or Spring Boot and have your CI/CD refresh the static files throughout the day on shared storage?
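One minimal way to sketch that with FastAPI (the directory name and the basic-auth check below are placeholders; a real deployment would plug in SSO/OIDC and let CI/CD refresh the files on shared storage out-of-band):

    import secrets
    from pathlib import Path

    from fastapi import Depends, FastAPI, HTTPException, status
    from fastapi.responses import FileResponse
    from fastapi.security import HTTPBasic, HTTPBasicCredentials

    app = FastAPI()
    security = HTTPBasic()
    DASHBOARD_DIR = Path("dashboards")  # static HTML, refreshed by CI/CD

    def require_user(creds: HTTPBasicCredentials = Depends(security)) -> str:
        # Placeholder credential check; swap in a real user store / SSO.
        ok = secrets.compare_digest(creds.username, "analyst") and \
             secrets.compare_digest(creds.password, "change-me")
        if not ok:
            raise HTTPException(status.HTTP_401_UNAUTHORIZED,
                                headers={"WWW-Authenticate": "Basic"})
        return creds.username

    @app.get("/dashboards/{name}")
    def serve_dashboard(name: str, user: str = Depends(require_user)):
        path = (DASHBOARD_DIR / name).resolve()
        # Guard against path traversal before serving the static file.
        if DASHBOARD_DIR.resolve() not in path.parents or not path.is_file():
            raise HTTPException(status.HTTP_404_NOT_FOUND)
        return FileResponse(path)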
If you're putting these behind a reverse proxy (nginx, etc.) you can just set up client certificate authentication, using your own locally generated CA or something like Vault for UI-based certificate generation. When you visit the site with the certificate installed on your device, it authenticates successfully; anyone without a correct certificate gets a "No certificate presented" error.
> When interviewing as a software engineer, for example, you’ll run into places that lean heavily on algorithms interviews.
And don’t get me wrong, those interviews can be fun (in the same way that Project Euler can be fun), but they're not very relevant for most jobs
I'm curious what changed around 2015-2016 that led to the current interview process. In the before times, the interview was more of a technical conversation; sure, they had some gotcha questions, but nothing too brutal. If you had real experience you could talk about, it was maybe validated against references or a background check. People usually looked for a CS or adjacent degree, and that was enough. The quality of folks wasn't too different in my experience.
I really feel for those with a lot of experience who are being dropped into the fire, especially those with families who may not have the time to memorize patterns for 3 months but are still very capable. Non-tech, low-paying jobs now ask you to create a Sudoku solver or trap rainwater within 45 minutes. There aren't many places to go for those who don't 'adapt'.
1. Interviewer: If you're a good software engineer, you can answer basic algorithmic questions.
2. Interviewees: Practice algorithmic questions so you appear to be a good software engineer.
3. Interviewer: People are just studying leetcode to get jobs, what can we do? Ask harder leetcode questions.
4. Other companies: Let's copy them since they're successful.
In short, the questions used to be reasonable until people specifically prepared for them. No one knew what to do about it, so they just raised the difficulty, which made it even more unfair for people who don't specifically prep.
FizzBuzz as an example of steps 2/3: people used to occasionally talk about how interviewees were just memorizing the answer, and when it was tweaked slightly (like adding a third number), a bunch of them could no longer solve it.
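To make the tweak concrete, here's a hypothetical three-number variant; anyone who memorized the classic if/elif answer tends to stumble on it, while the string-building approach handles any number of divisors:

    # Hypothetical tweaked FizzBuzz: a third divisor breaks the memorized
    # if/elif/elif/else structure.
    def fizzbuzzbang(n: int) -> str:
        out = ""
        if n % 3 == 0:
            out += "Fizz"
        if n % 5 == 0:
            out += "Buzz"
        if n % 7 == 0:
            out += "Bang"  # the added third number
        return out or str(n)

    for i in range(1, 22):
        print(fizzbuzzbang(i))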
“if a flag of truth were raised we could watch every liar rise to wave it”
I heard this lyric at a formative time, and I’ve seen it be proven true many times. Including tech interviews. People continually seek out those signals that imply knowledge and experience and even shared culture, but those signals inevitably become too small (smaller = quicker and easier to weed people out) and then they become the very things that people practice in order to look like they have the knowledge, experience, or shared culture they need in order to get through the doors and secure the opportunities.
Then those signals get burned and the cycle starts again (in fact, in my experience the cycles concurrently overlap).
It's actually great on the hiring side: because the tryhards are all prepping obscure CS questions, you can skip all that bullshit, and just having a conversation about technical topics has become a signal again. Measure something people aren't trying to game and you get a better assessment, go figure.
Yes, I explicitly do the opposite and use the most pragmatic exercises and questions.
Another one is a system design exercise where hyperscaling is not required and the thing is actually quite simple. Many who have specifically prepared by leetcoding and reading "Cracking the Coding Interview" ten times over will naturally overengineer everything, trying to fit the exercise to those book patterns and dropping all common sense, all while never having actually built anything meaningful.
I think these people will mostly try to rest and vest anyway. Truly passionate people will pass since they have actually built something and will understand the exercise.
When I get a system design question I always tell the interviewer "I'd just run it on a single server with an SQLite backend; that will be plenty for the median software service, and you haven't told me any numbers that suggest this needs more", and then it turns out they wanted it to run at the scale of WhatsApp.
For what it's worth if I ever had a candidate give me this answer on a systems design problem I'd probably immediately stop evaluating them and start selling them on the role.
Does 99.995% [1] of hackernews sound reasonable enough to you?
The reality is that a lot of systems (especially simple ones) run perfectly fine on a single server with next to no downtime. All the additional redundancies we introduce also add additional points of failure, and without the scale that makes them necessary you might actually end up reducing your availability.
> Does 99.995% [1] of hackernews sound reasonable enough to you?
How did you come up with that number? I looked at the link, and just one of the outages listed, on January 10, was 59 minutes. 99.99% uptime allows only about 52.6 minutes of downtime per year, so that outage alone made the uptime worse than 99.99% for the entire year before it was halfway through January.
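The arithmetic, for anyone who wants to check it:

    # Downtime budgets for a 365-day year; a single 59-minute outage
    # exceeds the four-nines budget on its own.
    MINUTES_PER_YEAR = 365 * 24 * 60
    for uptime in (0.999, 0.9999, 0.99995):
        allowed = MINUTES_PER_YEAR * (1 - uptime)
        print(f"{uptime:.3%} uptime -> {allowed:.1f} min/year allowed")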
That’s a pretty low level. That’s lower than my Pi-hole, and I wouldn’t consider my Pi-hole to be anything other than best endeavours (99.1%). Two would be fine, but there are common points of failure which would limit the solution.
The second one is just a standby, though, and not in another region. And if I recall correctly, dang mentioned during an outage that the failover to the standby is manual. But I'm not sure.
We've run an ERP system for 375,000 self-service employees (so a much heavier single transaction than HN) on a single (large:) DB2/AIX box with no downtime over the last 8 years. That's well within published specs for that hardware/software combo.
Yes we do have a DR box in another data centre now, in case of meteorite or fire.
This used to be the norm. A single hardware/software box CAN be engineered for high uptime. It's perfectly fine when we choose to go the other way and distribute software over a number of cheap boxes with meh software, but I get pet-peeved when people assume that's the ONLY way :).
The highest correlation I see to success in my field is a background in PC gaming. They tend to do better on the technical side of interviews; all the flashy certificates go out the window if you can't tell me what you would do when a computer won't POST.
Absolutely! One of the best jobs I ever worked at asked me what CPU I had in my computer at home. I talked about how I'd built my own PC, the parts that went into it, and why, and (much) later, when I was settled into the role, they said they could see I was a good candidate from that point onward. I think it's useful to show a curiosity about your tools and about the boundaries of your world, which helps when things aren't going to plan.
Do you really think it made you a better candidate, or did this person just feel a connection due to a similarity in interests or in your approach to computing? Was it even relevant for the job?
I find basic PC troubleshooting skills to be highly relevant to working in a data center. I learned about isolation testing and minimum config when I was like 13, trying to play MMOs; these things come relatively naturally to me due to exposure at a young age. That interest has prevailed well into my adulthood, and despite having no formal education I run circles around the relatively disinterested CompTIA kids.
As to being a better candidate, there is little I doubt less than the simple observations I can make at work. We could split hairs over causality, but there is a clear distinction between the people who go home to delid their CPU and the people who have devoted their time to certs instead.
Of course there will always be those sages who don't really care much for video games and have transcended the street knowledge; those people (1) have better jobs to work and (2) are harder to find, with little benefit.
> due to a similarity in interests or in your approach to computing?
It's hard to say for sure - both of these things are also part of being a good candidate and working well in a team.
But I do think that this experience is both because of and forms part of who I am: a certain troubleshooting-and-optimizing mindset and a curiosity about machines. Is it strictly necessary, or will all people with this shared hobby/past be this way? I don't know. I do think it's a useful, fuzzy signal, best used alongside other signals.
Was it relevant for the job? Not per the job description. There were times when I used related experience to solve problems or smooth things over, though, such as figuring out why a QA engineer's setup was bluescreening (faulty RAM), or having familiarity with the tools built into Windows for performance profiling and debugging memory and storage problems with programs.
Sadly, we can’t defer to Stack Overflow for interview success like we can with code. GPT may help break it up, but until we stop with the sociology questions and get back to technical delivery, we’ll continue to see people try to game a system and then try to game the system they gamed. It’s real-life NPM.
What changed? Well, around 2010 the industry became a target for very low-quality people doing whatever they could to land the job by faking and lying. All of these annoying interview tricks are just a way to filter out the people (90% of them now) who taught themselves how to talk the talk but couldn't program themselves out of a wet paper bag. And yes, there are ways to figure it out, but it's really hard given the current volume, and having your engineers stop and interview constantly kills productivity when you have to interview 20 people and only one can even count words in a file.
The function name is pretty misleading, since it looks like a constructor. Even if it's not a constructor, the functionality is pretty far away from the name you've chosen, so it's not a compelling API design either way. How would others use this function? How is this name self-documenting? The decision here is lacking consideration on all fronts.
Also, while I don't immediately recognize the exact language, the style looks off too. The function wetpaperbag is declared in all lowercase, but inside it's calling a function Quit that's capitalized? And, whether this particular language is camel-casing or snake-casing, it's clear that "wetpaperbag" isn't going to pass linting either way.
Overall, strong no hire. Next.
(/s... but only mostly, since the GP's point is that you actually can get a lot out of just asking simple interview questions, which I think we've helpfully proven out here)
Even just giving Leetcode questions isn’t enough anymore…
I’ve had plenty of people just memorize the top 50 used questions and be able to answer them, but then not be able to explain anything about their answer. It’s exhausting.
Explaining the reasoning while solving the code puzzle is the most important part, though. Just being able to solve them without speaking a word has never been enough. Unless you are talking about automated coding tests (but that also has never been enough).
It's super easy to filter for that though. Just change the problem slightly. If they give the rote answer you know what you need to know. It does require that you know the answer though. :)
> I'm curious what changed around 2015-2016 that led to the current interview process. In the before times the interview was more of a technical conversation, sure they had some gotcha questions, but nothing too brutal. If you had real experience that you co...
Places just copied big tech companies that were already doing that for many years prior to 2015. I had leetcode style interviews well before 2010 even.
Part of the problem is people just memorize things for interviews. We interviewed three candidates that were almost exact copies of each other in their responses. Maybe they all read the same interview book?
I also witnessed someone give a completely wrong (algorithm) whiteboard answer to a problem. Obviously they weren't thinking about the problem as they did it.
Methinks it’s a good-enough method to get good-enough candidates. Hiring is hard because it’s a negotiation: sellers always upsell their skills and buyers have to figure it out. Negotiation is a skill that not everyone has, so by having some sort of standard test like the current leetcode system, you can at least guarantee the folks coming in know the difference between a stack and a queue.
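For what it's worth, that floor really is low; a minimal sketch of the distinction:

    # Same insertions, opposite removal order: LIFO (stack) vs FIFO (queue).
    from collections import deque

    stack, queue = [], deque()
    for x in (1, 2, 3):
        stack.append(x)
        queue.append(x)

    print(stack.pop())      # 3 -- last in, first out
    print(queue.popleft())  # 1 -- first in, first out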
After the subprime crisis, interest rates went to or near zero, prompting a huge influx of cash available to VCs, with heightened FOMO due to the increasing complexity of software. The marginal quality of talent decreased while remuneration went through the roof, drowning out valid signals in CVs and interview processes with so much noise that any sufficiently complicated system was seen as valid. The extremely high rate of failure, made acceptable by the potential rewards of funding a unicorn early, further degraded the ability of anyone involved to conduct clear assessments.
Big Tech could afford to do that due to their aura of mysticism and greatness, but if you're a mom and pop shop with Google-esque aspirations but none of the compensation and status, I wonder how many candidates have the patience and interest to go through the hoops.
It all started a few years earlier. Google had determined that there was a strong correlation between understanding fundamental computer science and being a strong software engineer, so they began testing for computer science fundamentals. At the time lots of companies were trying to mimic Google's success, and it quickly became an industry standard. People began just studying how to answer these questions. The population of people who are good at these types of interviews shifted from hardcore, detail-oriented programmers with academic backgrounds to mostly people who spend time practicing leetcode-style questions.
One of their hiring criteria is cognitive ability. If it correlates with strong proficiency in CS algorithms, then that might be why they focus on it when hiring engineers.
(I am not saying that this strong correlation exists.)
The correlation doesn't exist. A large segment of the Leetcode-uber-alles practitioners are also the ones obsessed with total compensation as the ultimate measure. It was a mistake by Google and others to focus purely on this, but we are where we are. Many of the leetcode fanatics can't deal with real-world issues in software or operations.
I'm grateful for leetcode. Despite my background in electrical engineering, where I specialize in statistical signal processing, I never had the opportunity to delve into algorithm or data structure courses during my college years.
In my field, understanding signals and systems, probability, optimization, and numerical computing is very important.
Leetcode, in comparison, offers a more confined scope, making preparation more manageable and systematic, ultimately helping me break into software engineering.
We might also ask why EE specialists (and indeed half of STEM) need to fall back on software engineering to get meaningful and stable employment, but capitalism probably does not like us asking these pesky questions.
It's not that EE *needs* to fall back on software for stable employment; it's that software pays more, and ridiculously more at big tech companies.
I have both EE and CS degrees, but in CS-type jobs I literally make more than double what I would back in EE. Even though back in the day, I was definitely better at EE stuff than CS stuff.
This was true very early career, and for now at least continues to be true.
---
I'm still of the opinion that software will eventually revert to the mean, comparable to other engineering compensation... some year. It probably won't happen until there's significant regulatory and liability burden that doesn't really exist in software yet. Like some in software complain about GDPR or DMA or whatever, but crank that up orders of magnitude and then add career-and-company-ending lawsuit threats on top, and then you have real engineering. Then it becomes much more expensive to do things, profit margins go way down, and pay does as well.
Who knows when that'll actually happen. For now, software is bonkers profitable.
(When software engineering becomes real engineering, I posit that it will also cease to be as highly paid)
I love hardware and EE, but most of my friends that do it hate it and work on power lines only. I wish hardware was as good, but it's not as in demand and it's harder too!
"capitalism" Is it in the room with you? Do you need assistance? Did the evil capitalism touch you in your naughty place?
But seriously... everything uses computers and programming ties everything together. I don't think you have to be a grand master programmer but having those skills make you a better employee.
The real pesky question is why do you think socialism, communism or the alternatives to "capitalism" wouldn't benefit from having people able to use better tools? Productivity multipliers would help "The People" and The Masters in charge of the authoritarian dictatorship alternatives to "capitalism"...
If you don't have the background, to me you would be a risky bet, since you may be missing some fundamentals. But I'm not sure what niche you're specializing in, or whether you could provide some social proof that you've otherwise picked up the requisite software engineering skills.
I recently started interviewing again and haven't encountered many leetcode style questions this time around. I'm noticing lots of contrived questions that relate to some feature the company had to build.
Yeah, the kind of question I'm not comfortable giving a vague non-answer to because I'm not familiar with all the intricacies they spent months ironing out. It then turns into a "what do you want me to say" whack-a-mole of endless questions to gain the context needed to actually answer their question, usually asked by people who can't provide the context that actually led to the answer but still believe they understand why it was chosen.
It's a pretty specific scenario but it's happened to me
If the question is sufficiently related to the actual application, it could be an interview of the "solve our biggest problems for us, unpaid, without context" form, which I experienced once at a tiny startup. The technical lead just sat there silently as I probed with more questions. Eventually, I set a boundary and let them know that that is something for the business to solve.
Maybe contrived wasn’t the best word, but I’ve been asked to implement a simplified feature of the company’s product. At one company I was asked to create a simplified model of their product with support for undo/redo.
No, they definitely aren't using my code in any way. During the interview they're trying to see how I approach building something they have already completed.
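For anyone curious, one common way to approach that kind of undo/redo exercise is two stacks of snapshots. This is just an illustrative toy, not the company's actual exercise; real products usually store commands or diffs rather than whole states:

    class UndoRedo:
        def __init__(self, initial):
            self.state = initial
            self._undo, self._redo = [], []

        def apply(self, new_state):
            self._undo.append(self.state)
            self._redo.clear()  # new edits invalidate the redo history
            self.state = new_state

        def undo(self):
            if self._undo:
                self._redo.append(self.state)
                self.state = self._undo.pop()

        def redo(self):
            if self._redo:
                self._undo.append(self.state)
                self.state = self._redo.pop()

    doc = UndoRedo("")
    doc.apply("hello")
    doc.apply("hello world")
    doc.undo()
    print(doc.state)  # hello
    doc.redo()
    print(doc.state)  # hello world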
I just wish that if interviewing was going to become “random irrelevant stuff from second or third year CS” that we had gotten automata theory, because that was actually fun.
I recall being on a software engineer interview loop (FAANG); my topic to interview for was practical problem solving. The candidate performed poorly, unable to write any solution to a basic log-parsing problem in the language of their choice. In the debrief, apparently the candidate had done very well in the algorithmic part of the loop. I dissented on the decision to hire because of their inability to solve basic real-world problems; the group proceeded anyway.
I had an interview recently where I was given a 1000-line, deeply nested conditional and asked to traverse it. Like, what are they selecting for? I hope that’s not representative of their source code.
You could always learn the solutions once and put them into a set of Anki flashcards, so that you occasionally get reminded to practice them just before you forget them. You really don't have to keep repeating the 3 months "from scratch" experience. That's what I have done, pretty badly to be frank, and the prep time is now closer to 2 weeks.
Compared to many other industries where people are required to go get expensive and time consuming "continuing education", I don't think keeping a timed flashcard deck of maybe a couple hundred items total in rotation is all that onerous.
I have my war stories too. It was fun but frustrating. Definitely not as productive as today.
The early-2000s internet was kind of like that too. The web was huge relative to what we had to work with, and the products/services available now didn't exist then, so asking leetcode stuff had a logic to it.
Admittedly, that's not where we are now for most programmers so it makes sense that the interview process should adapt.
Even if I enjoy leetcoding, and I do, there are tons of things I would rather do first that are far more useful, such as building and working on side projects.
So to me leetcoding is a waste of time and less fun than actually building something and having other people use what you have built. And there are infinite things to build and learn there.
I get that. I only do medium and easy problems for a reason. Hard problems are hard, and I want to solve them over morning coffee. :)
Then again, I'm sure all of my leisure activities are a waste of time on someone's metric.
I probably learn more from the comments than the problems though. People post lots of interesting idioms and it's interesting to me to see how others codify various standard algorithms. Especially across languages.
Beyond the risk associated with using a ccTLD, I've noticed that several Fortune 100 companies are now outright blocking .ai domains because they host content believed to potentially leak intellectual property, especially within the financial services industry. This is something to consider if you're launching a new product targeting such customers and want to avoid having architects/engineers go through the hassle of requesting that your site be added to an allowlist.
Is modern Ceph appropriate for transactional database storage? How is the I/O latency? I'd like to move to a cheaper clustered file system that can compete with systems like Oracle's clustered file system, or DBs backed by something like Veritas. Veritas supports multi-petabyte DBs, and I haven't seen much outside of it or OCFS that scales similarly with acceptable latency.
Not sure about putting DBs on CephFS directly, but Ceph RBD can definitely run RDBMS workloads.
You need to pay attention to the kind of hardware you use, but you can definitely get Ceph down to 0.5-0.6 ms latency on block workloads doing single-thread, single-queue, sync 4K writes.
Source: I run Ceph at work doing pretty much this.
It is important to specify which latency percentile this is. Checking on a customer's cluster (336 SATA SSDs across 15 servers, so not the best one in the world):
50th percentile = 1.75 ms
90th percentile = 3.15 ms
99th percentile = 9.54 ms
That's with 700 MB/s of reads and 200 MB/s of writes, or approximately 7000 read IOPS and 9000 write IOPS.
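If you want to sanity-check percentiles like these yourself, fio is the right tool; purely as an illustration of what "single-thread, single-queue, sync 4K writes" means, here's a rough Python probe (the mount path is hypothetical):

    import os
    import time

    PATH = "/mnt/ceph-test/latency.bin"  # hypothetical RBD-backed mount
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
    block = os.urandom(4096)

    samples = []
    for _ in range(1000):
        t0 = time.perf_counter()
        os.pwrite(fd, block, 0)  # synchronous 4 KiB write, queue depth 1
        samples.append((time.perf_counter() - t0) * 1000)  # ms
    os.close(fd)

    samples.sort()
    for q in (0.50, 0.90, 0.99):
        print(f"p{int(q * 100)}: {samples[int(q * (len(samples) - 1))]:.2f} ms")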
These numbers may be good enough for your use case, but compared to what's possible with SSDs they aren't great. Please note, I mean well. Still a cool setup.
I'd like to see much more latency consistency, with the 99th percentile even sub-ms. You might want to set a latency target with fio and see what kind of load is possible before p99 hits 1 ms.
That said, it's all about context; depending on the workload, your figures may be totally fine.
From my dated experience, Ceph is absolutely amazing, but latency is indeed a relative weak spot.
Everything has a trade-off, and with Ceph you get a ton of capability but latency is the price. Databases, depending on requirements, may be better off on regular NVMe rather than on Ceph.
No, it's important when planning - e.g. one big database cluster that provides DB-as-a-service (but maybe needs some dedicated ops resources) vs. smaller DBs with virtualized storage on Ceph (ops resources for the Ceph cluster and VM tooling like k8s).
If the latter is too slow for your typical usage...
Oh, don't get me wrong, you will pay a price for disaggregated, highly available storage, and you might need to evaluate whether you want to pay that price or not. But those are two very different worlds, and only one of them gives you elastic disk size, replication, scale-out throughput, and so on.
GP makes Ceph sound worse than it is, when the reality is that shoving all your reads and writes over the network, and writing multiple times because of replication, is going to cost you no matter what tech you build it with.
One of my first projects was to interface with an IMS system using Python, of all things. There was only one guy who really knew the system, and I had to sit with him for hours to even begin wrapping my head around how a hierarchical DBMS worked, how space is managed, etc. I remember the first comment he made was that the database was created for use as an inventory system for the Apollo missions. The DB I had to work with was created in the '80s. This project was in 2021, so it's still out there, supporting critical infrastructure.