
Do you have anything you can point to for that?

Anecdotally, I have found that when we account for...

- Human resources costs (payroll, taxes, benefits) [1]

- Time to market for new features / products

- Utilizing reserved instances where it makes sense

- Appropriately sizing machines

We get with AWS...

- Faster time to market (accelerated revenue)

- Relatively the same cost per month (cheaper in some areas, more expensive in others)

- Significantly lower initial investment

- Increased redundancy (via many small servers vs. a few large servers) / decreased disaster recovery times (and in many cases automated recovery)

I'm not saying you're wrong, that's just what I've seen when we run the numbers internally. You may have seen differently which is why I ask.

[1] A devops person to manage servers, dedicated or not, can cost $80-$125k+ a year after benefits and taxes. That is a lot of AWS instances. And we have found we need an IT staff about half the size to manage AWS vs. a dedicated data center.



For my last start-up, I moved off EC2 to a dedicated vSphere cluster with a hosted provider. vSphere has an API, so adapting existing provisioning & deployment code was quite straightforward (I used the fog gem in Ruby). I basically treated the vSphere cluster very similarly to EC2 (small root stores, attached volumes, etc). Granted, I did have to give up the benefit of being in different regions.
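
For anyone curious what "treating vSphere like EC2" looks like in practice, here's a minimal sketch of the idea. The original used the fog gem in Ruby; this version uses pyvmomi (VMware's Python SDK) instead, and the host and credentials are invented placeholders:

    # Sketch only: enumerate VMs through the vSphere API, much like you
    # would list instances through the EC2 API. Not the author's code.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab-only; verify certs in production
    si = SmartConnect(host="vsphere.example.com", user="admin",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk the whole inventory for virtual machines.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)

    view.Destroy()
    Disconnect(si)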

I found the maintenance burden dropped substantially. It may have just been that I was running on newer hardware, but vSphere has built-in HA features such that VMs will just migrate between hosts when hardware degrades. In the two years I ran that setup, I never lost a VM.

I also had dedicated, modern hardware -- no need to worry about CPU steal. I could create instances/VMs of any size I want. If I needed to resize a VM, I could do so without losing all the data on it. As long as I had capacity, I could add new VMs at no extra cost. When I needed more capacity, I'd just call up and have them add a new blade to the cluster, which both increased my redundancy and gave me extra capacity. Essentially, the cost curve, while tied to usage, was much more favorable than linear growth once you get over the base setup costs.

On top of that, I had a real console to each of the VMs should anything go wrong. And if I couldn't fix something myself, there was dedicated staff at the colo facility I could just call up.

It's not for everyone, but there are options out there that give you an EC2-like environment with many of the benefits of having your own hardware.


The tradeoff for avoiding CPU steal is that you now have multiple services running on the same physical machine, so if that machine goes down you lose them all. On AWS it is very unlikely that two of your VMs end up on the same physical machine.

Also, I actually can't recall (in 8 years of AWS) a time when I saw CPU being stolen. Perhaps I've been lucky. I also don't watch for it constantly.
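
If you do want to check, steal time shows up as the "st" column in top/vmstat, or you can read it straight from /proc/stat on any Linux guest. A quick sketch:

    # Field 8 of the aggregate "cpu" line in /proc/stat is cumulative
    # steal time, in clock ticks (cpu user nice system idle iowait irq softirq steal).
    import time

    def steal_ticks():
        with open("/proc/stat") as f:
            return int(f.readline().split()[8])

    before = steal_ticks()
    time.sleep(5)
    print("steal ticks in 5s:", steal_ticks() - before)  # consistently > 0 suggests a noisy neighbour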

We actually did this with our dev servers, although swap vSphere for VirtualBox on Linux and the colo for a server room in our office. It worked great and saved a TON of money. But we don't need pesky things like 24hr uptime and dedicated bandwidth for dev servers.


Well, this is what I was referring to with the HA features of vSphere. It works basically like a compute level RAID. The VMs were stored on a SAN and vSphere monitored each of the compute blades. If one went down, the VMs were seamlessly migrated to a hot spare blade. The vSphere docs claim this can be achieved with zero packet loss -- a claim I was never able to test or verify. If you're worried about losing multiple machines, just add multiple hot spares.

Of course, this doesn't help if you lose an entire rack or the data center. I concede this was a trade-off. But given how many times an entire region went down when I was on EC2, I was satisfied with the risk based on the colo environment's uptime record. The provider did offer another facility, but the latency between the two was too high to be of practical use in a failover without keeping a completely mirrored configuration in both locations.

It sounds like you have some experience with vSphere, so I don't intend to be patronizing. But there's a huge difference between "enterprise" virtualization and what you get with an ad hoc setup using desktop virtualization tools.


> The tradeoff for CPU stealing is that now you have multiple services running on the same machine so if the machine goes down you lose them all. On AWS it is very unlikely that two of your VMs end up on the same physical machine.

Yes. But it's also very likely that when AWS has issues, the entire region is going to be having problems, like last Saturday (IAM, EC2, Auto Scaling, etc. broken badly for 6 hours).

> But we don't need pesky things like 24hr uptime and dedicated bandwidth for dev servers.

You're not getting this SLA unless you're managing everything yourself and are globally redundant. AWS doesn't provide bandwidth guarantees, nor will you get 100% uptime.


True. You have to design for failure with AWS, which does carry a large amount of cognitive overhead. If you don't design for failure, you're going to have a bad time.

However, designing for strict SLAs is not impossible with AWS. You just need multi-region redundancy, and you can get very, very good availability with multi-availability-zone redundancy alone.
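
To make that concrete, the usual building block is an Auto Scaling group spread over several availability zones, so losing one zone (or one physical host) doesn't take you down. A rough boto3 sketch, with made-up names:

    # Hedged sketch: spread instances across AZs with an Auto Scaling group.
    # Group name, launch configuration, and subnet IDs are invented placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-tier",
        LaunchConfigurationName="web-lc",
        MinSize=2,
        MaxSize=6,
        # One subnet per availability zone; AWS balances instances across them.
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333",
    )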

I have no excuse for the IAM outage; it sucked for our ops team. I guess my only two consolations are:

1. We haven't had a customer-visible outage due to AWS in years because we follow best practices (which does cost more -- but see my previous point on many small machines vs. a few large machines)

2. If we were running our own authentication and access control system similar to IAM, it too could have an outage.

But I agree, that is a bad thing about AWS.


Being on AWS does not necessarily mean that you don't need a devops person - especially at the scale where not being on AWS actually makes a difference to your margins.

I have seen quite a few people move off AWS successfully onto bare-metal leased hardware. S3 is just about the only service that's difficult to find an alternative for. Personally, I find using something like DynamoDB no different than using an Oracle DB: it's vendor lock-in. Unless you have enterprise-level support on AWS (which costs a lot), if you run into issues with Amazon's proprietary services, then good luck to you.

AWS is great to get started, but once you know that you're going to need scale (and lots of infra), it's best to move.

I say all of this as someone who has extensive operating experience on AWS. YMMV.


Oh, with cloud services you definitely need devops.

Without AWS you also need dedicated infrastructure, networking, and hardware people as well as devops.

You need people who know how to configure Cisco networking gear, who understand SANs, iSCSI, Fibre Channel, racks, blades: lots of stuff even devops people don't think about.


What exactly is "DevOps" to you? I ask, because almost everyone has a different answer. I've been doing "DevOps" for more than ten years, and all of the items you listed as required skill-sets are within my capabilities and have been used at many of the places I've worked. I'd be hard pressed to call someone an ops person if they don't understand the basics of server hardware, networking, and storage. These are essential components which the system relies on, the same system you are responsible for the uptime of.

I often hear statements like yours and can't help but wonder whether the general quality of ops people in our industry is really so bad and I just haven't encountered it, or whether ops people are treated so poorly in most organizations because developers automatically assume we don't know anything rather than asking.


DevOps for me would be things like Puppet, networking (subnets, load balancing, firewalls, etc.), deployments, CloudFormation templates, and ARM templates, rather than directly setting up hardware.

A dedicated networking person, on the other hand, would know the specifics of a certain vendor. You can make a career out of just knowing how to set up Cisco hardware and Cisco's embedded OS. DevOps people tend to be broader than that.


All of the items you listed fall under my definition of "DevOps" as well. I loosely define it in two ways:

1) "DevOps is a philosophy, not a title" (this is mostly because of managers thinking otherwise)

2) "DevOps is about focusing on automation of systems infrastructure to improve reliability, flexibility, and security."

Regarding #2, though, since my past experience includes building public clouds, my perspective does not limit "DevOps" to only utilizing public clouds. You can automate the build-out of physical hardware too. It's not really possible to automate rack-and-stack, but you can abstract that away through external logistics vendors that pre-rack/cable gear for you once you reach a certain scale.

Things like OpenStack Ironic, Dell Crowbar, Cobbler, Foreman, etc. are definitely DevOps tools, yet they are specifically focused on handling automation of physical hardware deployments.

As a further example, many networking vendors now provide APIs, but even when they didn't, they had SSH interfaces. It was very possible to automate the deployment of large quantities of networking gear using remote-execution tools like Ansible, or even just Ruby or Bash scripts. There's not necessarily a need for a dedicated networking person.
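
For illustration, the kind of remote-execution script I mean is nothing fancy. With Python and paramiko it's a few lines (hostname, credentials, and command are placeholders):

    # Sketch of SSH-based network-gear automation. Real config pushes would
    # open an interactive shell and send a command sequence; this just runs
    # one read-only command on a hypothetical switch.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("switch01.example.com", username="admin", password="secret")

    stdin, stdout, stderr = client.exec_command("show running-config")
    print(stdout.read().decode())
    client.close()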

Of course, as you scale up to a certain point in physical gear, it pays to have specialization. But that's true even in the cloud, where you may need to hire a specialist to deal with your databases, a specialist to deal with complexities of geographical scale/distributed systems, a specialist to deal with complex cloud networking (VPC et al). Just because something is abstracted away into a virtual space doesn't necessarily reduce its complexity or the base skillsets required to operate that infrastructure.


No. This is true for colocated, but absolutely false for dedicated.


You are correct! If you're just colocating, you need your own people to manage your gear. Dedicated equipment is managed by the service provider.

Disclaimer: Provided hosting services for ~8 years.


Companies that provide managed hosting for hardware tend to be just as expensive as cloud providers.

These comparisons tend to be: oh look, you can buy this Dell server for cheap and shove it in a data centre, and it's a lot cheaper.

If you go the route where you ask the provider to do it, the charges tend to be a lot higher. It's usually the sort of thing where you need to do a phone call before you can even get a quote.


Again, this is demonstrably false. What's commonly meant by "dedicated hosting" is essentially the same service level you get from EC2: they take care of the network and hardware, you take care of the software (1).

The cost here is 2x-4x lower. In another comment, I quoted a dual E5-2620 v3 w/64GB for $400, which is exactly this type of setup. This is offered by a company that's been in business for longer than AWS has existed and that is well respected in web hosting circles. You could definitely go cheaper. You could also go to IBM or Rackspace and pay as much as EC2, sure... but there are literally thousands of providers in the US that have been in business for over a decade and that'll beat EC2/Rackspace/SoftLayer by a wide margin.

Some that I've personally used: WebNX (LA), NetDepot (Atlanta), ReliableSite (NY), HiVelocity (Florida). I've also done it on the cheap with OVH (both Quebec and France) and Hetzner (Germany), as well as on the expensive side with SoftLayer (you can negotiate SoftLayer down considerably, even on a small order).

(1) That's actually a disservice to dedicated hosting, because the quality of the network is often better, and you won't get termination emails from your hosting provider or noisy neighbours the way you do on EC2.


Will those guys provide you with a SQL Server cluster with at least 2 servers, set up AlwaysOn, set up failover clustering with quorums, optimal disk partition alignment, DTC, set up subnets and ACLs to secure your cluster, plus lots of MSSQL-specific stuff I don't know about, and then monitor it for you? Because that's what managed service is. And most charge a lot for this because of the specialists it requires.

If you pay for RDS, Amazon does this for you, and it will already be well set up below the application level. They have guys monitoring it and keeping it up. And they can do this cheaply because of the economies of scale across all their customers.

If you just ask for a machine, then yes, that's going to be cheap. But you're forgetting you need databases, application delivery controllers, firewalls, VPN appliances. All of which may require niche vendor knowledge to set up, and then they would charge hefty consultancy fees to design a solution for you. Amazon puts this stuff behind simple APIs where you don't need to know all those vendor-specific skills.


I don't mean to be rude, but GitHub, Stack Overflow, and Wikipedia all run their own physical environments. It's easy to demonstrate that the cost savings you indicate in AWS don't exist.

AWS helps you prototype and iterate faster. It is not cheaper.


And I know lots of companies that host in the cloud and saw cost savings after moving away from managed hosting, because the providers were charging so much to look after it.

GitHub, Stack Overflow, and Wikipedia probably have very good dedicated specialists whose salaries only make sense when you operate at their scale.


> And I know lots of companies that host in the cloud and saw cost savings after moving away from managed hosting.

Can you provide a citation? Because AWS tools _are managed hosting_. They're just managed hosting without support (unless you're paying AWS for it on top of the service cost).


Well, they tend to be small or medium-sized companies, because they don't yet have serious enough scale to hire specialists but at the same time require reliable HA hosting. It can't just be some databases or web servers set up in an ad hoc fashion.

A lot of these articles that compare cloud vs. dedicated only compare some web server in a data centre.

They don't consider that building a complete platform is a lot of work. You need to set up databases in a highly available, performant fashion, backup solutions, off-premises backups, ADCs (NetScalers, for example), firewalls, subnets, storage (SANs), ACLs, site-to-site VPNs (or MPLS), etc., which requires some knowledgeable people. Companies that set this up for you rightly charge quite a bit for it.


This is almost a standard product with a lot of providers:

http://www.postgresql.org/support/professional_hosting/north...

> If you just ask for a machine, then yes, that's going to be cheap. But you're forgetting you need databases, application delivery controllers, firewalls, VPN appliances. All of ...

What do VPN appliances have to do with database hosting? I'm guessing you're an enterprise Java developer?


Notice how all of those companies present themselves as consultancies.

The rest just show virtual machine prices, dedicated machine prices, etc. You have to dig around the websites to find anything about "complete" database solutions. They present you with a phone number, you'll have to phone them up, and it won't be cheap. They will charge consultancy fees.

I'm not talking about database hosting. I'm talking about setting up a web hosting platform. A lot of companies would like secure access via VPN. Imagine you have some network appliance: you would probably want its admin console completely cut off from the public internet (via ACLs) and accessed via site-to-site VPN instead.


Having worked at one of these for ~6 years... no, they don't (well, it is true that you can get extra services of course, and I bet there are some that do charge, but in general, they don't).

For having a hosted and backed-up database that's redundant (I hear master-slave with auto-promotion is quite common; MySQL and PostgreSQL are available) and that THEY will fix when it becomes unreachable, or a cable is cut, or whatever: that's a standard product, and there are no fees for fixing it when it goes down.

All I know is that the most expensive dedicated customer we had (which we ran a database for) paid less than $20k, and he had about 800,000 DAUs (and is a well-known site).

Also, in all the time I've been in the industry, I've known of a single instance of hardware failure leading to data loss. I've heard of dozens of instances of perfectly authorized users accidentally erasing or corrupting the database (mostly because they wanted me to figure out how to repair it). This is the real threat, and it will require consultancy to fix, given the average level of technical competency of cloud customers, both on cloud and dedicated.

Another one was a dating site that had less than half that in charges (no idea about DAUs), moved to Amazon, balked at a bill that (after serious optimization and large customer discounts) exceeded $100k, and mostly moved back to us (only using Amazon as a backup location).

Cloud has a number of advantages, but price is definitely not one of them. And there's serious "fine print": most of the cloud advantages (like "not losing data when a physical machine dies") don't apply to small customers.

The big secret is that scaling customer apps generally runs into issues because the customer's programmers are not considering efficiency at all. Letting me loose on their code results in a 20-30 TIMES improvement within hours, not because I'm that good, but because usually the first factor of 10 is simply not fetching the same data 10 times from the DB. Programmer skill level doesn't really differ between languages (with exceptions for rare languages: Haskell programmers, for instance, are definitely better, but that will stop if Haskell ever becomes popular). There are exceptions, of course, but this is the common situation.

Having done 2 cloud migrations in the last year I worked at one of these managed hosting shops, scaling doesn't work any better on cloud; in fact, it often works far worse, because PHP's performance far exceeds what you get with Ruby or Python/Django. When you exceed shared serving, you can move up to managed/dedicated for a big performance boost (in fact, the hoster will probably do this for free when you upgrade your plan), which is likely to carry you through more growth. The issue is that PHP shared hosting in my experience often serves more customers than 4-5 dedicated Django or RoR servers can (mostly because, like all ORMs, Django's ORM means 40-50 database calls per page shown; RoR is no different). And unlike cloud, serving pictures or even video will result in your site slowing down, not in a $10k bill.
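
To make the "same data 10 times" point concrete, the classic culprit is the N+1 query pattern ORMs generate by default. A toy example with a hypothetical schema:

    # The N+1 pattern vs. a single JOIN, using an in-memory SQLite DB.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    """)

    # Slow: one extra query per post, re-fetching the same authors over and over.
    for post_id, author_id, title in db.execute(
            "SELECT id, author_id, title FROM posts"):
        author = db.execute(
            "SELECT name FROM authors WHERE id = ?", (author_id,)).fetchone()

    # Fast: one round trip for everything.
    rows = db.execute("""
        SELECT posts.title, authors.name
        FROM posts JOIN authors ON authors.id = posts.author_id
    """).fetchall()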

That said, there are things the cloud is good for. We fail badly for customers that need truly large amounts of disk space simply accessible (we're talking 10+ TB or so, constantly accessed, before this becomes an issue), or that need massive amounts of compute that must scale with very little warning. Even in those cases, the times I've seen it, they quickly decided that storing some files in the cloud through remote filesystems is far preferable to actually running the site on the cloud, due to cost. AWS Lambda is pretty useful for this: quickly run some code on input, store the result on S3, and don't keep a remote task running. Uploading Java code is easy, so if the task is processing/tagging/creating reports/PDFs/..., it's easy to code up, test locally, and, packaged as a jar, it will work.
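
The pattern is the same in any supported runtime, not just Java. Here's a sketch of "run some code on input, store the result on S3" as a Python handler (bucket and key names are made up):

    # Hedged sketch of the run-on-input, persist-to-S3 Lambda pattern.
    import json
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Do the one-off processing on the incoming event...
        report = json.dumps({"records_seen": len(event.get("records", []))})
        # ...persist the result, then let the function spin down.
        s3.put_object(Bucket="example-reports",  # hypothetical bucket
                      Key="report.json",
                      Body=report.encode("utf-8"))
        return {"status": "ok"}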


Ok, can you give me a link to a clear pricing structure that offers this?

We use C# on the .NET platform and write SQL directly, so language performance isn't really an issue for us.

I've had disks blow up on me before, but didn't lose any data because of HA.

I've also had power supply issues before. If we had been running bare metal, the automatic VM migration wouldn't have been possible.


I would have agreed about specialization in older versions of MSSQL but they specifically made the HA sysadmin experience with AlwaysOn a cakewalk. I think any sysadmin with a decent amount of general experience could get it all figured out within 1-2 normal workdays (including your average interruptions).


You don't need any of this stuff; there are providers out there that will give you all of it for less than Amazon's cost.

The problem is nobody wants to do it, because it's not sexy or name-brand: the won't-get-fired-for-hiring-IBM effect.


Well, you do want it if you're running a $30-million-a-year website off it. A small amount of downtime can cost a lot.


Talk about missing the point.

It's hard to sign up with a provider nobody has ever heard of. Doesn't matter if you're making $30 million or $3,000.


B2 kills S3.


Maybe once it leaves beta.


All valid points.

> Being on AWS does not necessarily mean that you don't need a devops person - especially at the scale where not being on AWS actually makes a difference to your margins.

Agreed. I didn't mean to imply that. At our scale it means we need 2 instead of 4. And for a consulting business, 1 person can handle multiple smaller clients, so each client doesn't need to take on the full cost.

Incidentally, since moving some of our self-managed servers to AWS services we have seen a drastic decrease in how often our on-call engineer is woken up at 3 AM. Which makes them happy. And happy employees are always a good investment. Admittedly, given enough time we could have made our stuff as resilient as AWS.

Also, being able to call Amazon support and get a second set of eyes on things helps, although the support plan is a bit pricey: in our case, $10k a year.

> I have seen quite a few people move off AWS successfully onto bare-metal leased hardware. S3 is just about the only service that's difficult to find an alternative for. Personally, I find using something like DynamoDB no different than using an Oracle DB: it's vendor lock-in. Unless you have enterprise-level support on AWS (which costs a lot), if you run into issues with Amazon's proprietary services, then good luck to you.

Vendor lock-in is a major issue, which is why we used self-managed databases on AWS for years (vs. RDS, Redshift, or DynamoDB). Our conclusion -- after a few years -- was that in our case (YMMV) we could accept the vendor lock-in, as long as we made sure our code was abstracted in a way that makes moving easier (something like the sketch below).
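
By "abstracted," I mean roughly this: hide the datastore behind a small interface so DynamoDB can be swapped out later. A rough illustration; the names are invented, not our actual code:

    # Keep vendor lock-in behind one seam: app code depends on KeyValueStore,
    # so a PostgresStore or RedisStore could replace DynamoStore later.
    from abc import ABC, abstractmethod

    import boto3

    class KeyValueStore(ABC):
        @abstractmethod
        def get(self, key: str) -> dict: ...

        @abstractmethod
        def put(self, key: str, value: dict) -> None: ...

    class DynamoStore(KeyValueStore):
        def __init__(self, table_name: str):
            self.table = boto3.resource("dynamodb").Table(table_name)

        def get(self, key: str) -> dict:
            return self.table.get_item(Key={"id": key}).get("Item", {})

        def put(self, key: str, value: dict) -> None:
            self.table.put_item(Item={"id": key, **value})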

Plus, with AWS's new Database Migration Service, moving databases off of (or onto) AWS is pretty easy.

Also, there are actually some really good S3 alternatives now. Their names escape me; I'll look them up later. However, I've seen many companies use only S3, Glacier, and CloudFront but not EC2. Your servers don't need to be on AWS to use them, obviously.

Almost all AWS services have good open source alternatives. And we have spent the time to make sure our system is architected and our code is written in a way that has a clear path to switching. Microservices really help here.

> AWS is great to get started, but once you know that you're going to need scale (and lots of infra), it's best to move. I say all of this as someone who has extensive operating experience on AWS. YMMV.

Maybe. But once you buy reserved instances in AWS, the cost is pretty low. And when hardware fails in AWS, it doesn't cost you more (assuming you've architected without single points of failure). I've found a lot of people did the math before AWS lowered their prices and introduced reserved instances. Either way, this is why I said "cheaper in some areas and more expensive in others."


As for performance numbers, you'll get better performance from a $400/m dual E5-2620 v3 w/64GB of RAM than a $1200/m c4.8xlarge.

The E5-2620 will also include a lot of bandwidth (which alone could save you thousands of dollars a month) and significantly better I/O (>512GB SSD plus one larger spinning disk, or maybe RAID with BBU).

The gap is probably worse right now as v5 chips are hitting the market.

Even at 100 servers, the price/performance difference doesn't cover the cost of 1 devops person. You're right about that. But I don't think AWS saves even a little devops time unless you lock in deeply.

I hope when you're considering the price, you're also factoring in the time developers spend on performance and architecture for AWS versus simply scaling up. Even if, as you say, AWS saves you 2 out of 4 devops roles, if it's costing 50 developers 10% of their time, you're way behind.


I guess in my particular case I don't need a dual E5-2620 v3 w/64GB of RAM.

The only way I could see needing that for our product is if we were virtualizing our own infrastructure.

But AWS is not one-size-fits-all. In that case, AWS probably doesn't make sense.

Side note: you also need to consider electricity, setup time, the cost of hardware failure, network access, rack space, etc. The hardware itself is not the most expensive part.

Side note two: where are you getting those prices? I've seen just the motherboard and 1U case cost close to that much. I'm assuming those are workstation prices, not rack mount... but if they are rack mount, please give me your supplier :)


Pick a smaller EC2 instance, and I'll find a comparable dedicated server that yields a similar price/perf gain (true, it might be more pronounced on the high end...).

Your two side notes tell me that you're conflating dedicated and colocated hosting. I can see now why you think EC2 saves you devops if you think colocated is your only choice.

I'm talking about dedicated hosting, so, no, I don't need to worry about electricity, setup time (not as much as you mean, anyway), hardware failure, network access, or rack space. This is an extremely common model.

The price that I quoted I just got quickly from Hivelocity.com (1). The price is actually $300, not $400... I'm not affiliated with them at all. They've been in business for longer than AWS has existed. I could have gotten a similar price from a thousand other companies.

(1) https://store.hivelocity.net/product/125/customize/1/


You are right, I was thinking colo more than dedicated.

Also, in case not everyone knows: you can get dedicated hardware from AWS now. It is expensive, though.


> Relatively the same cost per month

For us, AWS is not even remotely close in cost to a rack, hardware, and staff. The estimated spend for us to duplicate our dedicated rack setup in AWS is 3 times our monthly operational costs, and that doesn't include using any additional services. And to be clear, this includes operational staff, spare hardware, disaster recovery, etc.



