... so, basically, just how we ran an IT services department in the 1990s and early 2000s.
Except that:
Build servers took a day or so depending on the approval chain.
Hardware could be leased, or the capital expenditure written off over 3 years; it came with a 4-hour onsite service warranty (if you worked with the right partners), and because it was a capital rather than an operational cost, there were bottom-line benefits.
24/7 service coverage for major incidents was very doable if you planned correctly. Plus you owned the kit, so you could control how you brought in resources to support incident recovery, rather than waiting for your service provider to put the green dot back on their status page.
//CSB The max total outage time for one corporate where I ran the IT dept was 12 minutes in 3 years, while we swapped out an HP battery-backed disk cache controller that took itself offline due to a confidence check failure.
aye - and it took time to set up all of those things, time to maintain the gear, and delays for business/dev teams while the IT department made sure they knew how to run something stably.
> Build servers took a day or so depending on the approval chain.
This would only be true in a large shop with cold spares or virtualization. Server hardware generally has a 6-12 week lead time, the exception being if you are paying through the nose to a reseller who can deliver faster.
Just imagine the time it took to set up Nagios or Zabbix for monitoring. In a small shop you are probably talking about at least 1-3 days of work, plus calendar time for hardware. Add to that some time for dealing with the scale of metrics storage etc., depending on the shop.
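For anyone who never ran one of these: a Nagios deployment meant hand-writing object definitions like the sketch below for every host and service you monitored (hostname, address, and templates here are illustrative; `linux-server`, `generic-service`, and `check_http` are the stock examples shipped with Nagios).

```
# Illustrative Nagios object config - one of these per monitored box
define host {
    use        linux-server      ; stock host template
    host_name  web01             ; hypothetical hostname
    address    192.168.1.10      ; hypothetical address
}

define service {
    use                  generic-service   ; stock service template
    host_name            web01
    service_description  HTTP
    check_command        check_http        ; standard Nagios plugin
}
```

Multiply that by every host, service, contact group, and escalation rule, and the 1-3 days estimate starts looking optimistic for anything non-trivial.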