Slowly, very slowly, DO is creeping towards having enough features to peel users off of AWS and GCP. Block storage, load balancers, and now blob storage...if you don't need managed queues or databases, you can probably save some serious coin by running services on DO.
S3 sets a high bar in terms of durability and availability, so it will be interesting to see how well DO can compete, and it will take a long time to gain the same level of trust that S3 has earned.
I wouldn't be surprised to see DO either roll out DaaS next or partner with someone (Compose.io?) to do it. That's one of two things (the other being VPC, which I know they're working on per HN posts elsewhere) that make AWS/GCP tempting over DO for me.
That said, I kind of hope it's not Compose.io - they seem ridiculously overpriced for PostgreSQL compared to AWS's and GCP's offerings.
Using managed queues (SQS) from AWS. The rest is on Vultr. That mix-and-match works perfectly for my use-case. In other words, no need to be able to run everything on a single platform to get cost gains.
Seed-stage startup here that would love to start using more of our (time-limited) DO credit, but our cluster requires running in a VPC. Any way to sign up for early access to VPC when it does roll out?
Most likely we will run a beta similar to what we've done with Block and Object Storage, but it's still too early to provide a timeline that isn't in quarters with a margin of error of 1-2 quarters.
If there's one thing I've learned about object storage over the last year, it's this: don't use it unless you have a particularly narrow use case, like silly data volumes or CDN storage for a massively distributed content network. Anything else, forget it.
It rarely if ever works properly with standard Linux or Windows tools (s3), it has a rat's nest of arbitrary restrictions which require a language lawyer to decipher (s3/iam/vpc/roles), the APIs are vendor specific and sometimes even region specific (s3), the APIs are obtuse (s3 multipart), the clients are buggy (boto/boto3), suddenly you inherit extra costs and configuration requirements if you want to do something like expose it over http (route53/cloudfront/s3), and credential storage is a nightmare for distribution compared to rsync/ssh etc. Ugh.
Please note I have used Google Storage as well and all of the above also apply.
The only positive is that capital expenditure is low.
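To make the multipart complaint concrete, here's roughly what the low-level dance looks like in boto3 (a minimal sketch; the bucket and file names are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # Initiate, upload each part (min 5 MB except the last), collect
    # the ETags, then complete with the full part list. Skip the last
    # step and the orphaned parts keep accruing storage charges until
    # you explicitly abort the upload.
    mpu = s3.create_multipart_upload(Bucket="my-bucket", Key="big.bin")
    parts, part_number = [], 1
    with open("big.bin", "rb") as f:
        while True:
            chunk = f.read(8 * 1024 * 1024)
            if not chunk:
                break
            resp = s3.upload_part(
                Bucket="my-bucket", Key="big.bin",
                PartNumber=part_number, UploadId=mpu["UploadId"],
                Body=chunk,
            )
            parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
            part_number += 1
    s3.complete_multipart_upload(
        Bucket="my-bucket", Key="big.bin",
        UploadId=mpu["UploadId"],
        MultipartUpload={"Parts": parts},
    )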
I think the AWS CLI is pretty handy for a lot of tasks. I'm not sure which multipart API of S3 you're referring to [1]; there has been a higher-level API for quite some time now which is very convenient. Agree about the documentation for iam/vpc/roles being arcane, but tbh I have no idea how that could be simplified.
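For reference, the higher-level transfer API hides all of the part bookkeeping (same placeholder names as above):

    import boto3

    s3 = boto3.client("s3")
    # upload_file splits large files into parts above a size threshold,
    # uploads them in parallel, and retries failed parts automatically.
    s3.upload_file("big.bin", "my-bucket", "big.bin")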
I think I'm even more confused now. If you're going to have an app ssh or rsync, it's going to have a password or a private key to use that will be associated with some account, right? And if you want to use cloud storage, you'll also need a credential or a private key to use that will be associated with some account, right? What's the difference?
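In code the two models do look almost identical; both boil down to a secret tied to an account. A sketch (hosts, usernames, and keys are made up, and paramiko is just my choice of SSH client library here):

    import boto3
    import paramiko

    # Cloud storage: an access key pair tied to an IAM account,
    # usually read from env vars or ~/.aws/credentials.
    s3 = boto3.session.Session(
        aws_access_key_id="AKIA...",   # placeholder
        aws_secret_access_key="...",   # placeholder
    ).client("s3")

    # rsync/ssh: a private key tied to a user account on the host.
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect("files.example.com", username="deploy",
                key_filename="/home/me/.ssh/id_ed25519")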
We roll out a public beta with existing customers to ensure performance and reliability before doing a GA release. This lets customers use the new product and provide feedback, and helps us ensure a great level of service when the product goes full GA. So it's really a way for customers to try new products early, but the public details are intentionally light so that if there are any major changes during the beta period we can roll them out.
Certainly the GA release will have all of that information, but large product releases like this are about a year of work, and part of our release schedule is a beta test with existing customers. The introduction is basically to invite customers into the beta if they want to participate; it's also the final step before the GA release.
Customers who join the beta are using the actual product we are developing, but we don't commit to specifics publicly in case things change, which is why it's a beta release, not GA.
This might be possible because of advances in software-defined networking. A cloud firewall removes the hassle of setting up complex iptables rules in the name of network security and frees up the CPU cycles and memory they would consume.
DigitalOcean is getting better at solving common VPS use cases. I like it :)
We aren't providing pricing information until full GA, which is later in the year. All beta users will be given 1TB of free object storage during the beta period.
Certainly if you are considering moving a production workload, then a beta test, regardless of the company, may not be an ideal fit.
Object storage has been a highly requested feature from our customers, and now that we are getting closer to GA we want to ensure that customers get a chance to use the product and provide feedback on usability, bugs, performance, etc.
Um. OK, something is wrong with your UI. There is no option to use Object Storage after signing up and being required to enter a credit card. The drop-down menu font is blurry. The support page takes about 30 seconds to load.
This is an invitation for beta access: you sign up to request access, and an invitation to join the beta will be emailed separately.
They chose to build their own stack a few years ago instead of going with the already-available and business-friendly OpenStack. Going by the same reasoning, and the long time they took to get here, it's highly likely to be a custom thing.
We started writing code for DO in the summer of 2011. We evaluated OpenStack at the time but felt there were four major problems:
1 - First, it didn't really work. You could stand it up, but DHCP leases would fail and your VMs would go down; it just wasn't quite stable.
2 - Naively, we thought it was a bit too complicated. Sometimes being naive early on is great, and we certainly built a simpler system ourselves. As time went on, though, we realized that our backend was beginning to resemble OpenStack in complexity, but we still had more flexibility in hiding that complexity from our end users.
3 - OpenStack is designed for organizations, but not necessarily to be multi-tenant. Taking any software and making it multi-tenant for different customers is a large effort, so coupling that with OpenStack's immaturity just seemed like too much effort to put into it.
4 - It wasn't really open source the way we were used to. We were used to organic open source efforts by single developers or teams of developers that naturally grew and developed over time. OpenStack looked very much like "corporate"-sponsored open source, which wasn't something we felt comfortable putting our faith in.
My comment was a play on an old joke about Unix. My personal experience with OpenStack is that it's a moving target: complex and unstable, a combination of over- and under-engineering. The two large corporations I know that use it have required almost superhuman effort to keep their OpenStack environments up and running.