> This misses the induced demand effect of dramatically reducing the cost of the task. There are many things that only happen occasionally because they are annoying and slow. If you reduce the friction, suddenly everyone does it 10x per day and the whole company benefits from faster feedback loops.
But that assumes the task is still valuable when done 10 times more a day.
One key distinction to think about might be whether your task reduces cost or increases revenue.
A task that is a "cost" - e.g. if a user wants X, we need to do Y - likely won't need to happen 10x more frequently, with 10x the value, if demand for X hasn't changed. Making Y cheaper whenever X is desired increases the margin on X, which can still make a ton of business sense, but the top-line boost to profitability is capped at the original manual cost of Y.
A task that is revenue-driving - e.g. "we have to do X any time we're putting together a sales deck for a new prospect" - can have a much bigger flywheel effect. Can our existing sales team now bring in four times as many clients? That could be huge, and you've increased both margin and top line.
It doesn't need to be as valuable to still be net positive, though, because now it takes computer time rather than human time.
Imagine, for an ML product, making an accuracy report. If it's slow and requires lots of human time, you might do it once a quarter for releases to important customers. If it's cheap and quick, you can run it on every CI run to catch regressions before merging code. Sure, you run it maybe 1000x more often and don't get 1000x the value.
But, critically, the value is not the cost savings of no longer running it manually each quarter; the value is a more stable product and not spending time bisecting a quarter's worth of engineering work to figure out where bugs were introduced. And that was enabled by automating.
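The gate described above can be sketched as a small check that CI runs after evaluation. This is a minimal illustration with made-up names: the baseline file, tolerance value, and function names are all assumptions, not any particular team's setup.

```python
import json

# Allow small run-to-run metric noise; purely illustrative value.
TOLERANCE = 0.005


def check_regression(current_accuracy: float, baseline_accuracy: float,
                     tolerance: float = TOLERANCE) -> bool:
    """Return True if current accuracy is within tolerance of the baseline."""
    return current_accuracy >= baseline_accuracy - tolerance


def load_baseline(path: str = "accuracy_baseline.json") -> float:
    # Hypothetical baseline file, committed to the repo and updated
    # deliberately when a release intentionally changes the metric.
    with open(path) as f:
        return json.load(f)["accuracy"]
```

In CI you would fail the build (non-zero exit) when `check_regression` returns False, which is exactly the "catch it before merging" loop rather than a once-a-quarter manual report.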
Sure, if the cost of automating here is < the current cost of rework and investigation, you win. And dev time is expensive, so that sort of thing is usually an easy call.
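That comparison is just arithmetic. A back-of-the-envelope version, with every figure and name below being illustrative rather than from the thread:

```python
def automation_wins(build_cost_hours: float,
                    rework_hours_per_quarter: float,
                    horizon_quarters: int) -> bool:
    """Rough break-even: automating wins if the one-off build cost is
    less than the rework/investigation time saved over the horizon."""
    return build_cost_hours < rework_hours_per_quarter * horizon_quarters


# E.g. 40 hours to build automation vs. 20 hours of bug-bisecting
# per quarter, evaluated over a year:
worth_it = automation_wins(40, 20, 4)
```

A real version would also weight hours by who is spending them, since the point about expensive dev time is that an engineer-hour saved is worth more than the raw count suggests.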
Yeah, it doesn't need to keep increasing in value linearly with repeated runs; it's the summed value that matters.
The "cost" is fuzzy too, often - e.g. time and budget spent on reliability-focused engineers or active troubleshooting rarely drops to 0 if you don't automate anything. It might just make it more expensive to react to incidents!
Maybe turn it from a "gate" question - "should this thing be automated?" - into a prioritization one - "we could automate so many things; which should we do first?"