I'd recommend aiming for 10-20% test coverage on those projects, and also for startups trying to rapidly push an MVP out.
Tests have diminishing returns. You want to hit the most crucial ones, the ones that give you plenty of bang for your buck and may even save you time. That means finding the (usually small) handful of functions that implement your most crucial and most complicated business logic, and writing tests for them.
Anything past that is for companies with customers that need to maintain a certain level of quality and service. Worry about it when you get there.
Agreed. If you're a small start-up in a hurry with <4 devs, the best strategy would be to write top-down smoke tests on your API that will actually tell you when things break, even if not exactly where. Their actual derived coverage might be pretty high.
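A minimal sketch of what such a smoke suite can look like, with made-up endpoint names and a pluggable `get` callable (in a real project it would wrap something like requests or httpx):

```python
# Top-down smoke test sketch: hit the main endpoints and report which ones
# break. The endpoint paths are illustrative, not from any real app.
# `get` is any callable that performs a GET and returns (status, body).

def run_smoke_tests(get):
    """Return a list of (path, reason) for every endpoint that failed."""
    failures = []
    for path in ["/health", "/users", "/invoices"]:
        try:
            status, _body = get(path)
            if status != 200:
                failures.append((path, f"status {status}"))
        except Exception as exc:  # a crash is still useful signal
            failures.append((path, repr(exc)))
    return failures  # empty list means the app is at least alive
```

It won't tell you which function broke, but an empty result tells you the whole request path (routing, serialization, database) still works end to end.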
As long as that 10-20% is the more complex bits, and you are capable of picking out which bits are complex, it's IMO better to do that and spend the time saved on other shit that increases maintainability and reliability. Killing tech debt, killing your teammates, whatever.
This sort of basic level of decision-making for testing is something I wish I had, but all the tutorials and guides are about 100% code-coverage TDD, so it's hard to find a path to learn reasonable, high-ROI testing.
You should test the thing that makes you money first, and should delay testing supporting functionality. For instance, if I am writing a document converter, the thing that makes me money is the AST -> AST conversion. Testing that should come before testing parsing (bytes -> AST) and rendering (AST -> bytes).
The place where you make money is the place that will have the largest demand for new and changing functionality. And where things change the most is where you need tests to protect against regressions.
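To illustrate, here's a toy sketch of testing the AST -> AST stage directly; the node shapes and conversion rules are invented for the example, and parsing and rendering are deliberately absent because the test feeds the converter ASTs by hand:

```python
# Each node is (kind, payload): "str" carries text, the others carry children.

def convert(node):
    """Convert a hypothetical source AST into a hypothetical target AST:
    'emph' becomes 'italic', 'str' becomes 'text', containers recurse."""
    kind, payload = node
    if kind == "str":
        return ("text", payload)
    if kind == "emph":
        return ("italic", [convert(c) for c in payload])
    if kind == "doc":
        return ("doc", [convert(c) for c in payload])
    raise ValueError(f"unknown node kind: {kind}")

# The high-value test exercises the conversion rules with no parser needed:
assert convert(("doc", [("emph", [("str", "hi")])])) == \
    ("doc", [("italic", [("text", "hi")])])
```

Because the money-making logic takes ASTs and returns ASTs, its tests need no fixtures, files, or I/O, which is exactly what makes them cheap to write first.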
Since what to test is a sliding scale, determining when and what to test is perhaps something that comes with exposure to good testers.
For me, a few references I use are "tests are a thinking help for specifying behavior", "tests hold behavior in place" and "test until you feel comfortable".
The first is a note that TDD thoughts are written by developers that write APIs, not by developers that write applications. TDD is a fantastic tool for an API designer, because they force you to think about the experience of using the API. So, whenever I design APIs, I like TDD. This is also a good argument for why you should minimize setup code that "goes behind the scenes" - if you're testing, say, a REST API, do as much of the setup and assertions as you can via the REST API as well!
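As a tiny illustration of tests driving API design: imagine writing assertions like these for a hypothetical `slugify` helper before it exists. They force decisions about the interface (input, output, whitespace handling) up front:

```python
def slugify(title):
    """Turn a title into a URL slug. This implementation was written to
    satisfy the assertions below, which (hypothetically) came first."""
    return "-".join(title.lower().split())

# The "tests" that pinned down the API's shape before any implementation:
assert slugify("Hello World") == "hello-world"
assert slugify("  Already   spaced  ") == "already-spaced"
```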
The second helps me remember to think "a year from now, will all the behaviors this code needs to have be obvious, or is someone likely to unintentionally break it?". I try to write tests that will flag if someone broke an important edge case - or the main use case! Tests can be used as the programmer's equivalent of a carpenter's clamps, kind of.
The third is why I don't write as many tests anymore. I normally try to write one workflow-oriented feature test up front, like "When a user creates a new invoice, then that invoice should show up in the user's list of invoices with the values they entered, plus x,y,z auto-generated parameters". As I implement the feature, if I come across a piece of logic that makes me feel uncomfortable - lots of branching, or code where it's very important that it stays intact - I'll write a unit test or two to hold that in place; sometimes I don't write any unit tests, meaning I'd have written just one or two tests over the course of two or three days of implementation.
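That invoice workflow test might look roughly like this, against an imaginary in-memory service (all names here are hypothetical, and setup and assertions both go through the service's public interface):

```python
class InvoiceService:
    """In-memory stand-in for the application under test."""
    def __init__(self):
        self._invoices = {}
        self._next_id = 1

    def create_invoice(self, user, amount):
        invoice = {"id": self._next_id, "user": user, "amount": amount,
                   "status": "draft"}  # id and status are auto-generated
        self._invoices[self._next_id] = invoice
        self._next_id += 1
        return invoice

    def invoices_for(self, user):
        return [i for i in self._invoices.values() if i["user"] == user]

def test_create_invoice_shows_up_in_listing():
    service = InvoiceService()
    created = service.create_invoice(user="alice", amount=250)
    listing = service.invoices_for("alice")
    assert created in listing            # the values the user entered...
    assert created["status"] == "draft"  # ...plus auto-generated parameters
    assert created["id"] == 1
```

One test like this covers the whole create-then-list workflow, which is why it can carry a feature for days without any accompanying unit tests.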
It varies wildly with the type of application you're building. I can only speak for front-end development of complex SPAs w/ React.
Generally, most architectures in this domain have a combination of UI components, a data store, a set of update logic for the data store, and a set of asynchronous controllers that respond to events, interact with APIs, and call the aforementioned update logic.
In React the UI components are declarative: they (generally) contain no logic or algorithms, just a mapping from state to DOM. I see basically zero value in testing these. Bugs are almost always of the 'forgot to actually implement' variety, or are related to the way the page is rendered in a particular browser, rather than the DOM output the components are responsible for.
The data store update logic is usually either simple setters/getters (which don't need testing) or complex data transformations (which do).
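A sketch of testing one such 'complex' update in isolation, written as a pure function from state to state (Python here for brevity; the shape is the same in a JS reducer, and the domain of merging a fetched page of items into a cache is made up):

```python
def merge_page(state, page):
    """Merge freshly fetched items into the cached items, deduplicating
    by id, preferring the fetched version, newest-updated first."""
    by_id = {item["id"]: item for item in state["items"]}
    by_id.update({item["id"]: item for item in page})
    items = sorted(by_id.values(), key=lambda i: i["updated"], reverse=True)
    return {**state, "items": items}  # new state; the old one is untouched

state = {"items": [{"id": 1, "updated": 10, "title": "old"}]}
page = [{"id": 1, "updated": 20, "title": "new"},
        {"id": 2, "updated": 5, "title": "b"}]
new_state = merge_page(state, page)
assert [i["id"] for i in new_state["items"]] == [1, 2]
assert new_state["items"][0]["title"] == "new"
assert state["items"][0]["title"] == "old"  # input state not mutated
```

Because the update is a pure function, the test needs no store, no components, and no mocking: just data in, data out.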
The controllers also come in simple and complex varieties. Simple ones (one API call, one data store update once it's resolved) don't need testing. Anything more complex than that probably does.
So those are the two main targets for testing in the apps I build. I generally don't bother with anything else.
There are exceptions though. For example, here's an accordion UI component I built which relies on an asynchronous manual DOM update after the React DOM update has finished resolving. This could almost definitely use tests, if only to help any maintenance developers understand what it's doing.
Basically, as long as you have some sort of sane architecture, there should only be a few potential targets for testing, and they should be easily identifiable.
Uncertainty. Look for the parts of the code where you feel least sure you have the logic right.
A simple example of this is the bowling game kata: given the throws in a bowling game, calculate the (final) score. The 'hard' part is to keep strikes and spares in mind, including the bonus throws when a strike or spare is scored in the 10th frame.
If you were making an application that would help you keep track of your bowling score, that score calculator would have the highest ROI in terms of testing.
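A sketch of that calculator and the high-ROI tests for it; `rolls` is the flat list of pins knocked down per throw over one game:

```python
def score(rolls):
    """Score a complete ten-pin bowling game from its list of throws."""
    total, i = 0, 0
    for _frame in range(10):
        if rolls[i] == 10:                   # strike: 10 + next two rolls
            total += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:  # spare: 10 + next roll
            total += 10 + rolls[i + 2]
            i += 2
        else:                                # open frame
            total += rolls[i] + rolls[i + 1]
            i += 2
    return total

assert score([0] * 20) == 0           # gutter game
assert score([4, 5] * 10) == 90       # no strikes or spares
assert score([5, 5] * 10 + [5]) == 150  # all spares, plus bonus throw
assert score([10] * 12) == 300        # perfect game, 10th-frame bonuses
```

Note how the assertions target exactly the 'hard' part: bonus-roll lookahead and the extra throws in the 10th frame.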
I've been working on a 1-man project with maybe 5% test coverage, just for some critical libraries that I ended up refactoring a few times. There's actually one little library that has 100% test coverage and super detailed error messages, because that's where many of the bugs seemed to happen.
I also have a simple integration test that just clicks through everything and makes sure nothing crashes.
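The shape of such a test can be sketched as a link-following crash hunt; `driver` stands in for a real browser driver like Selenium or Playwright, and the `.visit()`/`.links()` interface is invented for the sketch:

```python
def crawl_for_crashes(driver, start="/"):
    """Visit every reachable page once and collect anything that crashed."""
    seen, queue, crashes = set(), [start], []
    while queue:
        path = queue.pop()
        if path in seen:
            continue
        seen.add(path)
        try:
            page = driver.visit(path)
        except Exception as exc:  # the only assertion: it didn't blow up
            crashes.append((path, repr(exc)))
            continue
        queue.extend(page.links())
    return crashes
```

It checks nothing about correctness, only that every screen still comes up, which is a surprisingly strong safety net per line of test code.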
Not having a lot of tests can be painful. Especially when you're learning new languages or frameworks, you almost always want to go back and rewrite some code (or just reorganize it into different files), and it's really nice to have tests when you do that. So sometimes that gives me the motivation to start writing up a bunch of tests, and then after that I dive in and refactor everything.