That's a weird one, but surely an aws-nuke bug? It must already use a deliberate order - there are plenty of resources that need anything linked/constituent deleted first - so that order is/was just not correct for those?
I've never used AWS Batch but heard only good things about it. If you could elaborate - i.e. warn me - I would very much appreciate it. What should I know before using AWS Batch?
First is the problem as described: if you delete the ComputeEnvironment role, you can't delete the compute environment itself.
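Rough boto3 sketch of the order that does work (all names here are made up), which is presumably the ordering aws-nuke would need:

```python
import boto3

batch = boto3.client("batch")
iam = boto3.client("iam")

COMPUTE_ENV = "my-compute-env"            # placeholder
SERVICE_ROLE = "my-batch-service-role"    # placeholder

# 1. Disable the compute environment (it must be DISABLED before it can be deleted).
batch.update_compute_environment(computeEnvironment=COMPUTE_ENV, state="DISABLED")

# 2. Delete the compute environment while the role still exists.
#    (Both steps are asynchronous; in practice you poll
#    describe_compute_environments between them until the state settles.)
batch.delete_compute_environment(computeEnvironment=COMPUTE_ENV)

# 3. Only now is it safe to delete the role
#    (assuming its policies have already been detached).
iam.delete_role(RoleName=SERVICE_ROLE)
```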
If you make a mistake setting up the Batch ComputeEnvironment using CloudFormation and it rolls back, the CloudFormation error message is useless ("resource failed to stabilise") and no trace is left behind for you to check.
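Not a fix, but one way to keep some trace behind (my own workaround, nothing Batch-specific or documented): create the stack with rollback disabled so whatever did get created stays around to inspect. Stack and template names are made up:

```python
import boto3

cfn = boto3.client("cloudformation")

# With rollback disabled, a failed stack stays in CREATE_FAILED and the
# resources that were created are left in place instead of being torn down,
# so you can at least look at what was (or wasn't) created.
cfn.create_stack(
    StackName="batch-debug-stack",                  # placeholder
    TemplateBody=open("batch-env.yaml").read(),     # placeholder template
    Capabilities=["CAPABILITY_NAMED_IAM"],
    DisableRollback=True,
)
```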
Two, Batch needs to use service-linked roles. If you create an AWS thing that needs a service-linked role via the console, it gets created for you automatically. If you create it via the CLI or CloudFormation, it does not. So if someone has created a Batch environment in your account before, the role will probably be there, but if you are setting up in a new account you might have no clue why things aren't working.
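For reference, a minimal sketch of creating it up front (assuming the "already exists" case comes back as an InvalidInput error):

```python
import boto3
from botocore.exceptions import ClientError

iam = boto3.client("iam")

try:
    # Batch's service-linked role; the console would create this for you,
    # the CLI/CloudFormation won't.
    iam.create_service_linked_role(AWSServiceName="batch.amazonaws.com")
except ClientError as err:
    if err.response["Error"]["Code"] != "InvalidInput":
        raise  # anything other than "role already exists" is a real problem
```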
Compute environments and JobSpecs that use Fargate are configured differently from EC2 Batch environments, requiring an extra IAM role.
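A rough boto3 sketch of the Fargate-specific bits (account IDs, ARNs, subnets etc. are placeholders): the compute environment uses a FARGATE resource type with no instance role, and the job definition additionally needs an execution role plus Fargate-style resource and network settings.

```python
import boto3

batch = boto3.client("batch")

# Fargate compute environment: resource type FARGATE, no instanceRole/instanceTypes.
batch.create_compute_environment(
    computeEnvironmentName="fargate-env",
    type="MANAGED",
    serviceRole="arn:aws:iam::123456789012:role/my-batch-service-role",  # placeholder
    computeResources={
        "type": "FARGATE",
        "maxvCpus": 16,
        "subnets": ["subnet-aaaa1111"],
        "securityGroupIds": ["sg-bbbb2222"],
    },
)

# Fargate job definition: needs the extra execution role (image pulls, log writes),
# which EC2 job definitions don't require.
batch.register_job_definition(
    jobDefinitionName="fargate-job",
    type="container",
    platformCapabilities=["FARGATE"],
    containerProperties={
        "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-image:latest",     # placeholder
        "executionRoleArn": "arn:aws:iam::123456789012:role/my-task-execution-role",  # placeholder
        "resourceRequirements": [
            {"type": "VCPU", "value": "0.25"},
            {"type": "MEMORY", "value": "512"},
        ],
        "networkConfiguration": {"assignPublicIp": "ENABLED"},
    },
)
```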
What I mean is 'yes, that's a weird interaction', but if that's how the AWS service works (however weird), then it's an aws-nuke bug that the ordering doesn't account for that weirdness?
Though 'not at all documented' makes that harder of course. I haven't used it.
But that's what I mean: if that's how it works, nuke needs to account for it. I assume it already does for other dependencies, because there are a lot of them - including ones that do make more (obvious) sense.
You're not the only one to complain about the documentation though, so it's easy to see how aws-nuke apparently missed it!