> You can't look at this in a vacuum. By making a general purpose datetime library more complex, you risk introducing new and different types of errors by users of the library that could be much worse than the errors introduced by missing leap second support.
Agreed. I can easily imagine it causing a situation where some numeric value is not interpreted correctly and ends up with a constant offset of 37 seconds. UNIX timestamps are entrenched, so deviating from them introduces misuse risks.
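To make that misuse risk concrete, here's a minimal Rust sketch. The constant and function names are hypothetical, not from any real library: the point is just that a consumer assuming plain UNIX semantics would see a constant 37-second skew when handed a TAI-like count of elapsed seconds.

```rust
/// TAI - UTC offset in whole seconds (37 s since 2017-01-01).
const TAI_UTC_OFFSET: i64 = 37;

/// Hypothetical: a clock that counts real elapsed SI seconds
/// (a TAI-like scale) instead of classic UNIX time.
fn tai_like_seconds(unix_seconds: i64) -> i64 {
    unix_seconds + TAI_UTC_OFFSET
}

fn main() {
    let unix_now = 1_700_000_000_i64; // a plausible UNIX timestamp
    let tai_like = tai_like_seconds(unix_now);

    // A consumer that assumes UNIX semantics now sees every value
    // shifted by a constant 37 seconds:
    let skew = tai_like - unix_now;
    assert_eq!(skew, TAI_UTC_OFFSET);
    println!("constant offset seen by a UNIX-assuming consumer: {skew}s");
}
```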
Regarding my use cases, I agree that they should still work fine. I could also come up with scenarios where a 1 s error is more meaningful, but they would be artificial. The main problem I can see is someone using an absolute timestamp instead of a more precise timer in a high-frequency context.
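As a rough illustration of that last point, here's a sketch using only the Rust standard library: a wall-clock timestamp can be stepped or adjusted, so a monotonic timer is the usual tool for short, high-frequency measurements.

```rust
use std::thread::sleep;
use std::time::{Duration, Instant, SystemTime};

fn main() {
    // Wall-clock time can be stepped or slewed (NTP adjustments,
    // manual changes), so differences between two readings are not a
    // reliable way to measure short intervals.
    let wall_start = SystemTime::now();

    // A monotonic clock only moves forward and is the usual tool for
    // high-frequency timing.
    let mono_start = Instant::now();

    sleep(Duration::from_millis(5));

    let mono_elapsed = mono_start.elapsed();
    // This can even return Err if the wall clock moved backwards in
    // the meantime.
    let wall_elapsed = SystemTime::now().duration_since(wall_start);

    println!("monotonic timer: {:?}", mono_elapsed);
    println!("wall clock:      {:?}", wall_elapsed);
}
```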
Overall, it's the general discussion about correctness vs. "good enough". I consider the extra complexity in a lib warranted if it means fewer edge cases.
> Overall, it's the general discussion about correctness vs. "good enough". I consider the extra complexity in a lib warranted if it means fewer edge cases.
Yeah I just tend to have a very expansive view of this notion. I live by "all models are wrong, but some are useful." A Jiff timestamp is _wrong_. Dead wrong. And it's a total lie. Because it is _not_ a precise instant in time. It is actually a reference to a _range_ of time covered by 1,000 picoseconds. So when someone tells me, "but it's not correct,"[1] this doesn't actually have a compelling effect on me. Because from where I'm standing, everything is incorrect. Most of the time, it's not about a binary correct-or-incorrect, but a tolerance of thresholds. And that is a much more nuanced thing!
[1]: I try hard not to be a pedant. Context is everything and sometimes it's very clear what message is being communicated. But in this context, the actual ramifications of incorrectness really matter, because they get to the heart of whether leap seconds should be supported or not.
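A tiny sketch of the "range, not an instant" point above, using plain integer nanosecond counts rather than any particular library's type: two physical instants that fall inside the same nanosecond collapse to the same timestamp value.

```rust
/// Truncate a hypothetical "true" instant, measured in picoseconds
/// since some epoch, to a nanosecond-resolution timestamp.
fn to_nanosecond_timestamp(picos_since_epoch: i128) -> i128 {
    picos_since_epoch.div_euclid(1_000)
}

fn main() {
    // Two distinct physical instants, 999 picoseconds apart...
    let a_picos: i128 = 1_234_567_890_000;
    let b_picos: i128 = a_picos + 999;

    // ...map to the same nanosecond-resolution timestamp: the value
    // names the whole 1,000-picosecond range, not a single point.
    assert_eq!(
        to_nanosecond_timestamp(a_picos),
        to_nanosecond_timestamp(b_picos)
    );
    println!("both map to nanosecond {}", to_nanosecond_timestamp(a_picos));
}
```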