The general problem seems to be that when a protocol uses the counter as a diversification component, it becomes another secret value to manage. I can't think offhand of examples where it is and isn't secret, but relying solely on an authenticated counter (via something like HOTP or a ratchet) is a subtle design challenge.
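To make "authenticated counter" concrete, here's a minimal HOTP sketch per RFC 4226 (stdlib Python only; the hard parts, key storage and counter persistence/resync, are left out):

    import hashlib, hmac, struct

    def hotp(key: bytes, counter: int, digits: int = 6) -> str:
        # RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter,
        # then "dynamic truncation" down to a short decimal code.
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)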
The counter starts from an initialization value (IV), which does not need to be secret. What must be ensured (to prevent forgery attacks) is that no (key, initialization value) pair is ever used twice.
There are two standardized ways (from NIST’s SP 800-38D) of ensuring unique (key,IV) pairs:
1. Use a random number generator to produce a fresh nonce for the entire IV, or
2. Construct the IV by concatenating two fields: one field for the most significant bits, set to zero initially and incremented every time the key is used; the other field simply fixed at zero.
With the second option, some new limitations apply to maintain the constructed IV's uniqueness (see the sketch below). The first is how many blocks of data you can encrypt in one operation, 2^size(fixedfield), because you don't want the block counter to overflow into your key-reuse field. The second is that the number of times you can reuse the key is now limited to 2^size(keyreusefield).
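A sketch of that bookkeeping, assuming (purely for illustration) a 64/64 split of a 128-bit counter block, with the fixedfield/keyreusefield names from above:

    FIELD_BITS = 64  # illustrative split: 64-bit keyreusefield, 64-bit fixedfield

    class CounterBlock:
        def __init__(self):
            self.uses = 0  # times this key has been used so far

        def next_iv(self, num_blocks: int) -> bytes:
            # Refuse anything that would break (key, IV) uniqueness.
            if num_blocks >= 2 ** FIELD_BITS:
                raise ValueError("too many blocks: would overflow into the key-reuse field")
            if self.uses >= 2 ** FIELD_BITS:
                raise ValueError("key exhausted: rotate the key")
            # Key-reuse counter in the most significant bits, fixed field at zero.
            iv = self.uses.to_bytes(8, "big") + bytes(8)
            self.uses += 1
            return iv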
Why not just treat the entire IV as one big integer with that many bits? With AES the block size is 128 bits, so a full-block CTR counter is also 128 bits, which is effectively infinite: 2^128 is about 3.4 x 10^38, so you could burn a billion IVs per second for the age of the universe and not come close to exhausting it.
It's simple enough to implement as well: just two 64-bit additions with a check for wraparound on one of them.
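A sketch of that increment, holding the 128-bit counter as two 64-bit words:

    MASK64 = (1 << 64) - 1

    def inc128(hi: int, lo: int) -> tuple[int, int]:
        # Increment a 128-bit counter stored as two 64-bit halves.
        lo = (lo + 1) & MASK64
        if lo == 0:                  # low half wrapped around...
            hi = (hi + 1) & MASK64   # ...so carry into the high half
        return hi, lo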
Reuse in implementations is common, and most protocols need to handle an asynchronous "offline" mode, which means a sliding window of valid counters, and that's a thread to pull. The difference between a single-use key and a limited-use key is significant, and it has been known to get fudged somewhere between the approved spec and the implementation.
Sure, don't reuse the IV and make sure it can't be easily derived or guessed; but if I'm looking for problems in an implementation, my point was that counter usage is one of the low-entropy threads to pull.
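That sliding window usually looks something like the anti-replay window from IPsec (RFC 4303). A sketch, assuming a 64-entry window, and in no way a vetted implementation:

    class ReplayWindow:
        # Accept each counter at most once, within a sliding window.
        def __init__(self, size: int = 64):
            self.size = size
            self.highest = -1  # highest counter accepted so far
            self.bitmap = 0    # bit i set => counter (highest - i) was seen

        def check_and_update(self, counter: int) -> bool:
            if counter > self.highest:
                shift = counter - self.highest
                self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.size) - 1)
                self.highest = counter
                return True
            offset = self.highest - counter
            if offset >= self.size:
                return False  # too old: fell off the back of the window
            if self.bitmap & (1 << offset):
                return False  # replay: already accepted once
            self.bitmap |= 1 << offset
            return True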
The GCM (or XSalsa20, or...) nonce is not secret - in almost all protocols the nonce goes over the wire in plaintext! - but it must be a number-used-once. Those are quite distinct requirements.
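Concretely, the usual pattern is to pick a random 96-bit nonce and prepend it, in the clear, to the ciphertext. A sketch using the `cryptography` package (an assumed dependency):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def seal(key: bytes, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)  # public, but must never repeat under this key
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def open_sealed(key: bytes, wire: bytes) -> bytes:
        nonce, ct = wire[:12], wire[12:]
        return AESGCM(key).decrypt(nonce, ct, None)  # raises InvalidTag on forgery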
This is also notably true for pretty much all uses of IVs/nonces/salts. They are intended to provide randomization to an otherwise deterministic process (similar to certain kinds of RSA/DSA padding) so that the contents can't be brute-forced or easily derived from known examples; known-plaintext attacks are one common case.
It’s the same reason salts are used for passwords - there will be people using the equivalent of ‘password’, and without a salt it would be trivial to find them with just a lookup table (often in nearly O(1) time). With a salt it at least requires per-user hashing of candidate values, with no reusability between users. So O(n*p) work (worst case), where n == number of users, p == number of candidate values to test per user. A pretty huge difference.
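A sketch with the stdlib's PBKDF2 (the iteration count is illustrative; a memory-hard KDF like argon2 or scrypt is the better choice where available):

    import hashlib, hmac, os

    def hash_password(password: str, iterations: int = 600_000):
        salt = os.urandom(16)  # per-user salt: no table reuse across users
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt, digest

    def verify_password(password: str, salt: bytes, expected: bytes,
                        iterations: int = 600_000) -> bool:
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return hmac.compare_digest(digest, expected)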
Without the person on the other end having the value of the IV/nonce/salt, you might as well just stuff random bytes into the ciphertext field instead of the actual ciphertext; if the algorithms are designed correctly, the two should be indistinguishable anyway.