> I can see how the type changing to a smaller one might cause problems, but I don't see how IshKebab's example could happen without exploiting implementation-specific overflow behavior.
INT_MAX (or INT_LEASTN_MAX for annathebannana's suggestion) doesn't require exploiting overflow behaviour, but going from 2^15 iterations to 2^31 or 2^63 iterations may be problematic.
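For concreteness, a minimal sketch (not the code in question, just the pattern): there is no overflow anywhere in the loop below, yet its trip count swings by orders of magnitude depending on how wide int is.

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    long long count = 0;
    /* No overflow here: i stops strictly below INT_MAX. The trip
       count is still platform-dependent, because INT_MAX is 32767
       where int is 16 bits and 2147483647 where it is 32 bits. */
    for (int i = 0; i < INT_MAX; i++) {
        count++;
    }
    printf("iterations: %lld\n", count);
    return 0;
}
```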
Problematic, yes, but not the difference between a terminating loop and an endless loop, which is what he said was the result of the size change. If that actually is the case, my guess is that the compiler for the latter platform was removing a compare-against-zero loop condition, reasoning that a value which only ever increments can be zero at most once and exploiting the undefined behaviour of signed overflow, while the former did not.
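If we had to guess at the shape of the original code, it might be something like this (purely a hypothetical reconstruction, since we haven't seen it):

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical reconstruction of the pattern described above:
       the only exit condition is the counter wrapping back to zero. */
    for (int i = 1; i != 0; i++) {
        /* loop body elided */
    }
    /* Because signed overflow is undefined behaviour, an optimizing
       compiler may treat `i != 0` as always true and emit a genuinely
       endless loop; with an unsigned counter the wraparound is well
       defined and the loop terminates. */
    puts("done");
    return 0;
}
```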
And using INT_MAX or similar for what sounds like a timing loop is a whole other can of bad practice. Then the problem isn't that you used the wrong type, it's that you used the wrong value.
> Problematic, yes, but not the difference between a terminating loop and an endless loop
If the loop does significant work and was calibrated for an expectation of 65k iterations, stepping up to 2 billion (let alone a few quintillion) is for all intents and purposes endless.
> And using INT_MAX or similar for what sounds like a timing loop is a whole other can of bad practice. Then the problem isn't that you used the wrong type, it's that you used the wrong value.
No objection there; doing that is making invalid assumptions. My point is that moving to an exact-width integer type does fix it.
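A sketch of what I mean, assuming the loop really was a wrap-to-zero counter as speculated above:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    long long count = 0;
    /* uint16_t wraps from 65535 to 0 on every conforming platform,
       so the trip count is pinned at 65535 regardless of how wide
       the native int happens to be. */
    for (uint16_t i = 1; i != 0; i++) {
        count++;
    }
    printf("iterations: %lld\n", count);
    return 0;
}
```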
> If the loop does significant work and was calibrated for an expectation of 65k iterations, stepping up to 2 billion (let alone a few quintillion) is for all intents and purposes endless.
And if it's not, it's not; the point being that "endless" in this case would be meaningless outside its literal sense unless we know more about the specific case.
> No objection there; doing that is making invalid assumptions. My point is that moving to an exact-width integer type does fix it.
No, using an exact value fixes it. Any unsigned integer type can represent every integer from 0 to 65535, so any of them is fine for this count. If you change the type to a larger integer type without changing the intended iteration count, the code does not have this problem; and if you change the value to something above 65535 without widening the type, you have a different problem. Thus, the problem described here does not pertain to the type of the variable used.
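In code, the point is simply this (illustrative counter type and bound, not the original):

```c
#include <stdio.h>

int main(void) {
    /* With an explicit bound, the trip count is 65535 no matter
       whether the counter is unsigned int, unsigned long, or
       anything wider: every unsigned integer type can represent
       the values 0 through 65535. */
    unsigned long iterations = 0;
    for (unsigned int i = 0; i < 65535u; i++) {
        iterations++;
    }
    printf("%lu\n", iterations);  /* prints 65535 on any platform */
    return 0;
}
```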