
That 35% savings works out to about a US mid-level engineer's salary, sans benefits. Hope the time needed for the migration was worth it.


I broke the rules and read the article first:

> In the context of AWS, the expenses associated with employing AWS administrators often exceed those of Linux on-premises server administrators. This represents an additional cost-saving benefit when shifting to bare metal. With today’s servers being both efficient and reliable, the need for “management” has significantly decreased.

I've also never seen an eng org where a substantial part of it didn't do useless projects that never amount to anything


I get the point that they tried to make, but this comparison between "AWS administrators" and "Linux on-premises server administrators" is beyond apples-and-oranges and is actually completely meaningless.

A team does not use AWS because it provides compute. AWS, even when using barebones EC2 instances, actually means on-demand provisioning of computational resources with the help of infrastructure-as-code services. A random developer logs into his AWS console, clicks a few buttons, and he's already running a fully instrumented service with logging and metrics a click away. He can click another button and delete/shut down everything. He can click a button again and deploy the same application on multiple continents with static files served through a global CDN, deployed with a dedicated pipeline. He clicks another button and everything is shut down again.

How do you pull that off with "Linux on-premises server administrators"? You don't.

At most, you can get your Linux server administrators to manage their hardware with something like OpenStack, but then they would be playing the role of the AWS engineers that your "AWS administrators" don't even know exist. However, anyone who works with AWS works only on abstraction layers above the one a "Linux on-premises administrator" works on.


This is the voice of someone who has never actually ended up with a big AWS estate.

You don't click to start and stop. You start with someone negotiating credits and reserved instance costs with AWS. Then you have to keep up with spending commitments. Sometimes clicking stop will cost you more than leaving shit running.

It gets to the point where $50k a month is indistinguishable from the noise floor of spending.


> This is the voice of someone who has never actually ended up with a big AWS estate.

I worked on a web application provided by a FANG-like global corporation that is a household name, is used by millions of users every day, and which can and did make the news rounds when it experienced issues. It is a high-availability, multi-region deployment spread across about a dozen independent AWS accounts and managed around the clock by multiple teams.

Please tell me more how I "never actually ended up with a big AWS estate."

I love how people like you try to shoot down arguments with appeals to authority when you are this clueless about the topic and are this oblivious regarding everyone else's experience.


Hrm. I have worked for a global corporation that you have almost certainly heard of. Though it's not super sexy.

The parent you're replying to resonates with me. There's a lot of politics about how you spend and how you commit; it's almost as bad as the commitment terms for bare-metal providers (3, 6, 12, 24-month commits). Except the base load is more expensive.

It depends a lot on your load, but for my workloads (which are fragile, dumb, but very vertical compute with wide geographic dispersion), the cost is so high that a few tens of thousands of dollars has gone unnoticed numerous times, despite having in-house "fin-ops" folks casting their gaze upon our spend.


Hey me too.


Going to be honest: If your AWS spend is well over 6 figures and you’re still click-ops-ing most things you’re:

1) not as reliable as you think you are
2) probably wasting gobs of money somewhere


From the parent poster's comments, the developers could very well be putting together quick proofs of concept.

I've set up an "RnD" account where developers can go wild and click-ops away. I also set up a separate "development" account where they can test their IaC manually, then commit it and have it tested through a CI/CD pipeline. After that it goes through the standard pull request/review process.


> A random developer logs into his AWS console, clicks a few buttons, and he's already running a fully instrumented service with logging and metrics a click away

In a dream. In the real world of a medium-to-large enterprise, a developer opens a ticket or uses some custom-built tool to bootstrap a new service, after writing a design doc and maybe going through a security review. They wait for the necessary approvals while they prepare the internal observability tools, and find out that there is an ongoing migration and their stack is not fully supported yet. In the meantime, they need permissions to edit the Terraform files to update routing rules and actually send traffic to their service. At no point do they, nor will they ever, have direct access to the AWS console. The tools mentioned are the full-time job of dozens of other engineers (and PMs, EMs, and managers). This process takes days to weeks to complete.


> A random developer logs into his AWS console, clicks a few buttons, and he's already running a fully instrumented service with logging and metrics a click away...

This only works that way for very small-spend orgs that haven't implemented SOC 2 or the like. If that's what you're doing, then you probably should stay away from the datacenter, sure.


> This only works that way for very small spend orgs that (...)

No, not really. That's how basically all services deployed to AWS work once you get the relevant CloudFormation/CDK bits lined up. I've worked on applications designed with high-availability in mind, which included multi-region deployments, which I could deploy as sandboxed applications on personal AWS accounts in a matter of a couple of minutes.

What exactly are you doing horribly wrong to think that architecting services the right way is something that only "small spend orgs" would know how to do?


Your original comment gives the impression that you like AWS because anyone can click-ops themselves a stack; that's why you got all these click-ops comments.

How is an army of "devops" implementing your CF/CDK stack any different from an army of (lower paid) sysadmins running proxmox/openstack/k8s/etc on your hw?


> Your original comment gives an impression that you like AWS (...)

My comment is really not about AWS. It's about the apples-to-oranges comparison between the job of a "Linux on-premises server administrator" and the value added by managing on-premises servers, and the role of an "AWS administrator". Someone needs to be completely clueless about the realities of both job roles to assume they deliver the same value. They don't.

Someone with access to any of the cloud provider services on the market is able to whip up and scale whole web applications with far more flexibility and speed than any conceivable on-premises setup managed with the same budget. This is not up for debate.

> How is an army of "devops" implementing your CF/CDK stack any different from an army of (lower paid) sysadmins running proxmox/openstack/k8s/etc on your hw?

Think about it for a second. With the exact same budget, how do you pull off a multi-region deployment with an on-premises setup managed by your on-premises Linux admins? And even if your goal is a single deployment, how flexible are you to stand up this scheme to test a prototype and shut the service down afterwards?


> Someone with access to any of the cloud provider services on the market is able to whip out and scale up whole web applications with far more flexibility and speed than any conceivable on-premises setup managed with the same budget.

Bullshit. I've seen people spin wheels for months/years deploying their cloud native jank and you should read the article - it's not nearly the same budget.

> Think about it for a second. With the exact same budget, how do you pull off a multi-region deployment with an on-premises setup managed by your on-premises linux admins?

You do realize things like site interconnect exist, right? And it will likely be cheaper than paying your cloud inter-region transfer fees. You're going to be testing a multi-regional prototype? Please.

Look, there's a very simple reason why folks have been chasing public clouds, and it has nothing to do with their marketing spiel of elastic compute, increased velocity, etc. The reason is simple: teams get control of their spend without having to ask anyone (like the old-school infra team) for permission.


You just log into the server...

Not everything is warehouse scale. You can serve tens of millions of customers from a single machine.


Not on HN, where everyone uses Rust and yet needs a billion-node web-scale mesh edge blah at a minimum, otherwise you are doing it wrong. Better to waste $100k per month on AWS because "if the clients come, downtime is expensive" than to just run a $5 VPS and actually make a profit while there aren't many clients yet. It's the rotten VC mindset. Good for us anyway; we don't need to make $10B to make the investors happy. Freedom.


Yeah, that's part of it. The other part is that you can move stuff that is working, and working well, into on-prem (or colo) if it is designed well and portable. If everything is running in containers, and orchestration is already configured, and you aren't using AWS or cloud provider specific features, portability is not super painful (modulo the complexity of your app, and the volume of data you need to migrate). Clearly this team did the assessment, and the savings they achieved by moving to on-prem was worthwhile.

That doesn't preclude continuing to use AWS and other cloud services as a click-ops-driven platform for experimentation, while requiring that anything targeting production be refactored to run in the bare-metal environment. At least two shops I worked at previously used that as a recurring model (one focusing on AWS, the other on GCP) for stuff that was in prototyping or development.


> Yeah, that's part of it. The other part is that you can move stuff that is working, and working well, into on-prem (or colo) if it is designed well and portable.

That's part of the apples-and-oranges problem I mentioned.

It's perfectly fine if a company decides to save up massive amounts of cash by running stable core services on-premises instead of paying small fortunes to a cloud provider for the equivalent service.

Except that that's not the value proposition of a cloud provider.

A team managing on-premises hardware covers barely a fraction of the value or flexibility provided by a cloud service. That team of Linux sysadmins does not, nor will it ever, provide the level of flexibility or cover the range of services that a single person with access to an AWS/GCP/Azure account can. It's like claiming that buying your own screwdriver is far better than renting a whole workshop. Sure, you have a point if all you plan on doing is tightening that one screw. Except you don't pay for a workshop to tighten screws; you use it to iterate over designs for your screws before you even know how much load they're expected to take.


Counterpoint: most shops do not need most of the bespoke cloud services they're using. If you actually do, you should know (or have someone on staff who knows) how to operate it, which negates most of the point of renting it from a cloud provider.

If you _actually need_ Kafka, for example – not just any messaging system – then your scale is such that you better know how to monitor it, tune it, and fix it when it breaks. If you can do that, then what's the difference from running it yourself? Build images with Packer, manage configs with Ansible or Puppet.

Cloud lets you iterate a lot faster because you don't have to know how any of this stuff works, but that ends up biting you once you do need to know.
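
To make "you better know how to monitor it" concrete, here's a minimal sketch of the kind of check you end up owning either way. It assumes the kafka-python client, a broker reachable on localhost:9092, and a hypothetical topic/consumer group (none of these are from the article):

  from kafka import KafkaAdminClient, KafkaConsumer

  BOOTSTRAP = "localhost:9092"                         # assumed broker address
  GROUP, TOPIC = "billing-workers", "billing-events"   # hypothetical names

  # Offsets the consumer group has committed so far.
  admin = KafkaAdminClient(bootstrap_servers=BOOTSTRAP)
  committed = admin.list_consumer_group_offsets(GROUP)

  # Latest offsets the brokers hold for the same partitions.
  consumer = KafkaConsumer(bootstrap_servers=BOOTSTRAP)
  partitions = [tp for tp in committed if tp.topic == TOPIC]
  latest = consumer.end_offsets(partitions)

  # Lag per partition: how far behind the group is running.
  for tp in partitions:
      print(f"{tp.topic}[{tp.partition}] lag={latest[tp] - committed[tp].offset}")

Whether this runs against a managed service or your own brokers, someone still has to understand what the numbers mean and what to do when lag grows.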


> Counterpoint: most shops do not need most of the bespoke cloud services they're using. If you actually do, you should know (or have someone on staff who knows) how to operate it, which negates most of the point of renting it from a cloud provider.

Well said! At $LASTJOB, new management/leadership had blinders on [0][1] and were surrounded by sycophants & "sales engineers". They didn't listen to the staff that actually held the technical/empirical expertise, and still decided to go all in on cloud. Promises were made and not delivered, lots of downtime that affected _all areas of the organization_ [2] which could have been avoided (even post migration), etc. Long story short, money & time were wasted on cloud endeavors for $STACKS that didn't need to be in the cloud to start, and weren't designed to be cloud-based. The best part is that none of the management/leadership/sycophants/"sales engineers" had any shame at all for the decisions that were made.

Don't get me wrong, cloud does serve a purpose and serves that purpose well. But, a lot of people willfully ignore the simple fact that cloud providers are still staffed with on-prem infrastructure run by teams of staff/administrators/engineers.

[0] Indoctrinated by buzzwords
[1] We need to compete at "global scale"
[2] Higher education


> Yeah, that's part of it. The other part is that you can move stuff that is working, and working well, into on-prem (or colo) if it is designed well and portable. If everything is running in containers

Anyone who says that hasn’t done it at scale.

“Infrastructure has weight”. Dependencies always creep in, and any large-scale migration involves regression testing, security, dealing with the PMO, compliance, dealing with outside vendors who may have whitelisted certain IP addresses, training, vendor negotiations, data migrations, etc.

And then, even though you use MySQL, for instance, someone somewhere decided to use a "load data into S3" AWS MySQL extension, and now they are going to have to write an ETL job. Someone else decided to store static web assets in S3 and serve them from there.


I mean, aside from my current role in Amazon, my last several roles have been at Mozilla, OpenDNS/Cisco, and Fastly; each of those used a combination of cloud, colo and on-prem services, depending on use cases. All of them worked at scale.

I specifically said "if it is designed well", and that phrase does a lot of heavy lifting in that sentence. It's not easy, and you don't always put your A-team on a project when the B or C team can get the job done.

The article outlines a case where a business saw a solid justification for moving to bare metal, and saved approximately 1-3 SDE salaries (depending on market) in doing so.

That amount of money can be hugely meaningful in a bootstrapped business (for example, for one of the businesses my partner owns, saving that much money over COVID shut-downs meant keeping the business afloat rather than shuttering the business permanently).


I didn't mean to imply that you haven't worked at scale, just that doing a migration at scale is never easy, even if you try to stay "cloud agnostic".

Source: former AWS Professional Services employee. I just "left" two months ago. I now work for a smaller shop. I mostly specialize in "application modernization", but I have been involved in hairy migration projects.


Most folks aren't focused on portability. Almost every custom built AWS app I've seen is using AWS-specific managed services, coded to S3, SQS, DynamoDB, etc. It's very convenient and productive to use those services. If you're just hosting VMs on EC2, what's the point?


I worked for a large telco where we hosted all our servers. Each server ran multiple services bare-metal (no virtualization), and it was easy to roll out new services without installing new servers. In my next job, using AWS, I missed the level of control over network elements and servers, the flexibility, and the ability to debug by taking network traces anywhere in the network.


hrm, did you read the article?

> Our choice was to run a Microk8s cluster in a colocation facility

They go on to describe that they use Helm as well. There's no reason to assume that "a fully instrumented service with logging and metrics" still isn't a click and a keypress away.

Your points don't make a whole lot of sense in the context of what they actually migrated to.


Emm… run proxmox?


> "I've also never seen an eng org where a substantial part of it didn't do useless projects that never amount to anything"

Bootstrapped companies generally don't do this btw. This is a symptom of venture backed companies.


Absolutely not my experience. I used to work for a Japanese company that was almost entirely self funded. They wouldn't even go to the bank and get business loans.

Your description applies to a substantial number of business units in that company. They also had a "research institute" whose best result in the last decade was an inaccurate linear regression (not a euphemism for ML).


I think if you're at the size of "business units" you're not "bootstrapping" anymore.


You've never had friends and colleagues working at big (local and international) established companies sharing their experience of projects being canned, and not just repurposed?


Don't do what? Go cloud? Sure they do; they just generally don't get the cloud credits meant to get them hooked.


Apologies - I edited my comment to clarify context.

> I've also never seen an eng org where a substantial part of it didn't do useless projects that never amount to anything


There's nothing about being bootstrapped vs. venture-backed that lets anyone know, a priori, whether a given project will be successful or not. Something like 80% of startups fail within the first two years.


That is having your cake and eating it too. AWS administrators don't do the same job as on-prem administrators.


Well yeah, that's why they're more expensive.


> I've also never seen an eng org where a substantial part of it didn't do useless projects that never amount to anything

Name one business (tech or non-tech) where this is OK/accepted and still competitive in capitalism.

How long will we keep making these inflated salaries while being known for being wasteful, globally speaking?


What is that jockey doing on the horse? We only pay him to race!


It’s not like other industries (or academia) are any better. Please.


They also saved the salaries of the team whose job was doing nothing but chasing misplaced spaces in YAML configuration files. Cloud infrastructure doesn't just appear out of thin air; you have to hire people to describe what you want. And with the complexity mess we're in today, it's not at all clear which takes more effort.


Sorry, what? They're running on k8s and using Helm... so there are still piles of YAML. It's wild to conflate migrating to bare metal with eliminating YAML-centric configuration.


100% this. Cloud is a hard slog too, just a different slog. We spend a lot of time chasing Azure deprecations. They are closing down a type of MySQL instance, for example, in favor of one which is more "modern", but from the end-user point of view it is still a MySQL server!


To manage a large fleet of physical servers, you need similar ops skills. You're not going to configure all those systems by hand, are you?


They spent $150,000 on physical servers. Probably 1 or 2 racks. Not much of a 'fleet'.


I mean, “fleet” semantics aside, surely $150,000 of servers is enough for at least one full-time person to be maintaining them. The point is that there are absolutely maintenance and ops costs associated with these servers.


There's maintenance cost to everything that runs a userland.

What we're missing is tracking how much time is spent managing hardware and firmware (or even network config, if we're being generous) versus how much time is spent on OS config.

From personal experience (as a sysadmin, before that became an entirely unsexy term), the overwhelming majority of my ops work was done in userland on the machine; maybe 96-97% of my tasks had nothing to do with hardware at all.

Since I got rebranded as an SRE, the tools and the pay sure did get a lot better, but the job is largely similar. Running in VMs does make deployment faster, but once deployed I find the maintenance burden to be about the same (or perhaps a little higher), as things seem to become deprecated or require changes from our cloud vendor a bit more often.


Depends on the size of the fleet.

If you're using fewer than a dozen servers, manual configuration is simpler. Depending on what you're doing, that could mean serving a hundred million customers, which is plenty for most businesses.


A dozen servers would be pushing it. It's not the size of the fleet, it's the consistency and repeatability of configuration and deployment. I assume there are other, non-production servers: various dev environments, test/QA. How do you ensure every environment is built to spec without some level of automation? It's the old "pets" vs "cattle" argument.

I've worked at companies with their own data centers and manual configuration. Every system was a pet.
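
Even a small amount of scripting is usually what separates cattle from pets. A minimal sketch of the "built to spec" idea, assuming Fabric for SSH and a hypothetical host list and nginx config (not anyone's actual setup):

  from fabric import Connection

  # Hypothetical inventory: the same spec applied to every box, prod or QA.
  HOSTS = ["web1.internal", "web2.internal", "qa1.internal"]

  for host in HOSTS:
      c = Connection(host)
      # Same packages, same config file, same service state everywhere.
      c.sudo("apt-get install -y nginx")
      c.put("configs/nginx.conf", "/tmp/nginx.conf")
      c.sudo("mv /tmp/nginx.conf /etc/nginx/nginx.conf")
      c.sudo("systemctl restart nginx")

At a dozen servers, a loop like this (or Ansible/Puppet, which are the same idea with more guard rails) is already less error-prone than hand-editing each machine.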


Exactly. At my last job there was always an issue with the YAML… and as a "mere" software engineer, I had to wait for offshore DevOps to fix it, but that's another issue.


If you are waiting on a "DevOps department", it isn't DevOps… it's operations.


My company called them the DevOps Team


Did you try and fix it yourself? Was that not allowed?


Yeah, that was not allowed. We started off managing our own DevOps, even in our own AWS region, separate from the rest of the company. Eventually the DevOps team mandated we move to the same region as everyone else, and soon after that there was enough conflict between my (offshore) team and DevOps (onshore) that DevOps demanded and got complete control over our DevOps, though not our infrastructure.


CDK exists. I haven't had to use YAML in ages, and I refuse to.
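
For context, CDK lets you declare the same resources in a general-purpose language and synthesizes the CloudFormation for you. A minimal sketch in Python (the stack and bucket names are made up for illustration):

  from aws_cdk import App, RemovalPolicy, Stack
  from aws_cdk import aws_s3 as s3
  from constructs import Construct

  class StaticAssetsStack(Stack):
      # One versioned S3 bucket, with no hand-written YAML or JSON templates.
      def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
          super().__init__(scope, construct_id, **kwargs)
          s3.Bucket(
              self,
              "AssetsBucket",
              versioned=True,
              removal_policy=RemovalPolicy.DESTROY,  # fine for a throwaway sandbox
          )

  app = App()
  StaticAssetsStack(app, "StaticAssetsStack")
  app.synth()  # writes the CloudFormation template under cdk.out/

Under the hood this still synthesizes a CloudFormation template, so the YAML/JSON exists; you just never edit it by hand.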


Getting a bare metal stack has interesting side effects on how they can plan future projects.

One that's not immediately obvious is keeping experienced infra engineers on staff who bring their expertise to designing future projects.

Another is the option to tackle projects in ways that would be too costly if they were still on AWS (e.g. ML training, stuff with long and heavy CPU load).


A possible middle-ground option is to use a cheaper cloud provider like Digital Ocean. You don't need dedicated infrastructure engineers and you still get a lot of the same benefits as AWS, including some API compatibility (Digital Ocean's S3-alike, and many others', support S3's API).

Perhaps there are some good reasons to not choose such a provider once you reach a certain scale, but they now have their own versions of a lot of different AWS services, and they're more than sufficient for my own relatively small scale.
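
To illustrate the S3 API compatibility: the regular AWS SDK can usually be pointed at such a provider just by overriding the endpoint. A minimal sketch with boto3 against DigitalOcean Spaces (region, bucket name, and credentials are placeholders):

  import boto3

  # The standard S3 client, aimed at a Spaces endpoint instead of AWS.
  session = boto3.session.Session()
  s3 = session.client(
      "s3",
      region_name="nyc3",                                 # placeholder region
      endpoint_url="https://nyc3.digitaloceanspaces.com",
      aws_access_key_id="SPACES_KEY",                     # placeholder credentials
      aws_secret_access_key="SPACES_SECRET",
  )

  # From here on these are plain S3 API calls.
  s3.put_object(Bucket="my-space", Key="hello.txt", Body=b"hello")
  resp = s3.list_objects_v2(Bucket="my-space")
  print([obj["Key"] for obj in resp.get("Contents", [])])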


That’s the niche DigitalOcean is trying to carve out. I’ve always loved and preferred their UI/UX to that of AWS or Azure. No experience with the CLI but I would guess it’s not any worse than AWS CLI.


How would you compare it to Google Cloud Run? Thanks.


Yep and hardware is only getting cheaper. Better to just buy more drives/chips when you need them.


An m4.xlarge (4 vCPU, 16 GB, no storage) is about $100 per month. The underlying CPU is 7 years old (slow! and power-hungry) and has 36 threads, which means it runs 18 of these instances.

The total revenue so far for one CPU is 100 x 18 x 12 x 7 ≈ $150k. If used as a spot instance it's $144/month, so about $200k.

A current i9-14700K has 32 threads but can run 12 of these instances (max 192 GB memory). This CPU will cost you about $800. Memory is cheap, so for about $1-2k you're all set, with a machine that's way faster and cheaper.

Basically, buy a bunch of NUCs and you're saving yourself around $1,500 per month per NUC. It pays for itself in 1 month.

Cloud hosting is —insane—

Not even touching memory ballooning for mostly idle applications.

Lastly, don't give me reliability as an argument. These were all ephemeral instances with no storage, so you'll have to pay extra for that slow non-NVMe storage platform.
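
Back-of-the-envelope, just reproducing the figures above (the per-CPU instance count and prices are the assumptions stated in this comment, not verified AWS pricing):

  # All figures are the assumptions stated above, not verified AWS pricing.
  on_demand_per_month = 100      # $ per m4.xlarge per month
  instances_per_cpu = 18         # claimed instances hosted per physical CPU
  months = 12 * 7                # 7 years of service

  revenue_per_cpu = on_demand_per_month * instances_per_cpu * months
  print(f"revenue per CPU over 7 years: ${revenue_per_cpu:,}")  # ~$151,200

  nuc_cost = 2_000               # $ for CPU plus memory, per the estimate above
  savings_per_month = 1_500      # claimed savings per NUC per month
  print(f"payback: {nuc_cost / savings_per_month:.1f} months")  # ~1.3 months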


If you can do it for so much less than Amazon (and all the other cloud vendors), then why don't you create your own cloud and undercut them?


You wouldn't ask people who cook for themselves why they don't open a restaurant to undercut the competition.


Hosting for the world is different than hosting for yourself.


I'm sure they are also getting better performance as well.

Not sure how to factor that $ into the equation.


Also, I'd imagine most companies can fill unused compute with long-running batch jobs so you're getting way more bang for your buck. It's really egregious what these clouds are charging.


To get real savings with a complex enough project you will need one or more FTE salaries just to stay on top of AWS spending optimizations


Plus...

2x FTEs to manage the AWS support tickets

3x FTEs to understand the differences between the AWS-bundled products and the open source stuff they wrap, whose config you can't get at closely enough to actually use it as intended.

3x Security folk to work out how to manage the tangle of multiple accounts, networks, WAF and compliance overheads

3x FTEs to write HCL and YAML to support the cloud.

2x Solution architects to try and rebuild everything cloud native and get stuck in some technicality inside step functions for 9 months and achieve nothing.

1x extra manager to sit in meetings with AWS once a week and bitch about the crap support, the broken OSS bundled stuff and work out weird network issues.

1x cloud janitor to clean up all the dirt left around the cluster burning cash.

---

Footnote: Was this to free us or enslave us?


Our experience hasn't been THAT bad, but we did waste a lot of time in weekly meetings with AWS "solutions architects" who knew next to nothing about AWS aside from a shallow, salesman-like understanding. They make around $150k too, by the way. I tried to apply to be one, but AWS wants someone with more sales experience, and they don't really care about my AWS certs.


As an AWS Solution Architect (independent untethered to Bezos) I resent that comment. I know slightly more than next to nothing about AWS and I can Google something and come up with something convincing and sell it to you in a couple of minutes!


How do I make $150k (or more) having high-level conversations about AWS with senior software engineers? Seriously. I’m sure it’s not an “easy” job, but fuck if I make less actually writing the software (median SWE salary is something like $140k in the US- depends on who you ask but it’s not the $250k+ that Levels.fyi would lead you to believe).


I can guarantee an SA working for AWS makes more than $150k. A returning L4 intern makes that much (former AWS Professional Services employee)

And no one cares about AWS certifications. They are proof of nothing and disregarded by anyone with a modicum of a clue.

I’m speaking as someone who once had nine active certifications and I believe I still have six active ones. I only got them as a guided learning path. I knew going in they were meaningless.


I'm sure everyone here is aware of the colloquial hate that certs get outside of IT ("A+", etc.), myself included.

What I don't know is why AWS would rather pay a salesman $150k (or more… I looked up salaries a few months ago, but either way…) to sell the wrong things to customers, rather than have a software engineer who has actually used these products sell the right thing to customers. I should hope that all AWS Solutions Architects need to pass the cloud fundamentals exam before interacting with customers, but maybe not?

Deming is rolling in his grave.


There are different types of SAs at AWS. There are the generalist SAs who I never worked with and the specialist SAs who have deep experience in a specific area of the industry - not just AWS.

And even they aren’t to be confused with “consultants”. SAs are free to customers and give general guidance and are not allowed to give the customers any code.

Consultants are full time employees at AWS who get paid by the customer to do hands on keyboard work. But even we couldn’t work in production environments. We did initial work and taught the customer how to maintain and enhance the work.

If you don't know the cloud fundamentals, learning enough to pass a few multiple-choice questions doesn't take much.

As an anecdote, I passed the first one - the Solution Architect Associate - before I ever opened the AWS console.


Thanks for the detailed info.

I’m aware the bar is low when it comes to the entry-level certs, and that’s why I’d hope AWS SAs (the free kind) have to pass one or two.


This cracked me up. I was "asked" to get some AWS certs since I joined a company that was an AWS Partner. We have a new VP that is forcing other people to get them. Big waste of time for all practical purposes.


So to be clearer.

Having an AWS certification is not a requirement or even that important to get a job at AWS in the Professional Services department. Depending on your job position you are required to have certain certifications once you get there.

I now work for a partner, and partners are required to have a certain number of "certified individuals" to maintain partnership status. But even then, certifications never came up in my three interviews after getting "Amazoned" a couple of months ago.

But then again, after having AWS ProServe on my resume and having been a major contributor to a popular open source project in my niche, doors opened for me automatically.


I didn't mind getting the certifications to "help out" the company, I just find it such a racket: paying for courses, buying books, $200 tests. Some people take months preparing for that stuff! I didn't buy any courses and only spent a few days preparing, but others spend tons of time and money on it.

And based on my own personal interactions with other "certified" individuals, it doesn't actually mean anything.


> Footnote: Was this to free us or enslave us?

I assume whichever provides more margin to Jeff Bezos.


Where I work (hint: very large satellite radio company) this is very much a thing.


I was thinking the same thing. If the migration took more than one man-year then they lost money.

Also what happens at hardware end-of-life?

Also what happens if they encounter an explosive growth or burst usage event?

And did their current staffing include enough headcount to maintain the physical machines or did they have to hire for that?

Etc etc. Cloud is not cheap but if you are honest about TCO then the savings likely are WAY less than they imply in the article.


> If the migration took more than one man-year then they lost money.

Your math is incorrect. The savings are per year. The job gets done once.

> Also what happens at hardware end-of-life?

You buy more hardware. A drive should last a few years on average at least.

> Also what happens if they encounter an explosive growth or burst usage event?

Short term, clouds are always available to handle extra compute. It's not a bad idea to use a cloud load-balancing system anyway to handle spam or caching.

But also, you can buy hardware from amazon and get it the next day with Prime.

> And did their current staffing include enough headcount to maintain the physical machines or did they have to hire for that?

I'm sure any team capable of building complex software at scale is capable of running a few servers on prem. I'm sure there's more than a few programmers on most teams that have homelabs they muck around with.

> Etc etc.

I'd love to hear more arguments.


The job is never done once. Not in hardware.


> Also what happens if they encounter an explosive growth or burst usage event?

TFA states that they maintain their AWS account, and can spin up additional compute in ~10 minutes.


Freedom from vendor lock-in is hard to put a value on, but for me it's definitely worth a mid-level engineer's salary in any context.


Yes, as long as you ignore the literally 100 or so other SaaS products that the average enterprise uses. You are always locked into your infrastructure at any decent scale.


If we saved 35%, that would pay for 20 FTEs.

Not that we'd need them, as we wouldn't have to write as much HCL.


Outside of the US you could likely pay for two mid-level or three junior engineers with this, though. In France, for example, a junior engineer in an average city would run around $46-48k USD in total employer cost, and France is already expensive compared to a lot of other countries with talented engineers.


And the servers are brand new, so maintenance is not factored in yet.


They probably now need to hire 24/7 security to watch the bare metal if they're serious about it, so I'm not sure about that engineer.


Onsite security is offered by the colo provider. You can also pay for locked cabinets with cameras and anti-tampering measures, or even a completely caged-off section, depending on your security requirements.



