You seem to say it's not worth writing a lot of tests, but then you talk about tests breaking due to bad changes - if you don't write those tests in the first place, then how do you get into that situation?
I didn't word my earlier comment very well - I don't mean to advocate for 100% coverage, which I personally think is a waste of time at best, and a false comfort at worst. Is this what you're talking about? What I wanted to say is that unit tests should be written for every part of the code that you're testing, i.e. break it into bits rather than test the whole thing in one lump, or better, do both - unit tests and integration tests.
Write a lot of tests - mostly integration. Unit tests have proven more harmful than helpful - they're great when the API is used so often that changing it would be painful, so you don't change it anyway. Otherwise I want to change the API and the code that uses it as requirements change. If I were writing a string or a list type I'd unit test that - but those are in my standard library, so I'm not. Instead I'm writing code that is only used in a few places, and all of those places will change every few years as requirements change.
I'm sorry, I don't really understand your argument; unit tests are bad because code changes? Surely if you're only testing small pieces, then that works better when you make a change? You only need to update the tests for the part of the code that changed, not everything that touches it. That's another great thing about mocks - your interfaces should be stable over time, so the mocks don't need to change that often, and when you're changing MyCalculator, all the dependencies of MyCalculator remain the same, so you only have to update MyCalculatorTests.
This is what I mean about a well-factored code base; separate things are separate, and can be tested independently.
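To make the mocking argument concrete, here is a minimal sketch of the kind of test being described. MyCalculator and MyCalculatorTests are the names used in the comment above; the RateProvider dependency and its `get_rate` method are hypothetical examples invented for illustration.

```python
import unittest
from unittest.mock import Mock

# Hypothetical example: RateProvider is an assumed stable interface
# that MyCalculator depends on (not from the original discussion).
class MyCalculator:
    def __init__(self, rate_provider):
        self.rate_provider = rate_provider

    def total(self, amount):
        # Apply an externally supplied rate to an amount.
        return amount * self.rate_provider.get_rate()

class MyCalculatorTests(unittest.TestCase):
    def test_total_applies_rate(self):
        # Mock the dependency's interface. As long as that interface
        # stays stable, internal changes to MyCalculator mean only
        # this test file needs updating - nothing else fails.
        provider = Mock()
        provider.get_rate.return_value = 1.2
        calc = MyCalculator(provider)
        self.assertAlmostEqual(calc.total(100), 120.0)

if __name__ == "__main__":
    unittest.main()
```

The point of the sketch is the isolation: the mock stands in for the collaborator, so a failure in this test points at MyCalculator and nowhere else.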
> Unit tests have proven more harmful than helpful
Why? I find them very useful. If I change a method and inadvertently break it somehow, I have a battery of tests testing that method, and some of them fail. Nothing else fails because I didn't change anything else, so I don't need to dive into working out why those other tests are failing. It's separation of concerns.
I have concluded that a unit needs to be large - not a single class or function, but a large collection of them. When "architecture astronauts" draw their boxes, they are drawing units. Often thousands of functions belong to a unit. Even then, it is often easier to use the real other unit than a test double.
You can make your own definition of "unit" if you like, but the usual definition is the smallest whole piece of something - a unit of currency, a unit of measure, etc.
If your unit is thousands of functions wide then you have a monolith, and there are widely discussed reasons why we try to avoid those.