I would say C makes this sort of thing far more likely because it's usually a ton of effort to obtain suitable containers. C++ and Rust have plenty of things like `unordered_set`/`HashSet` built in, so people are much more likely to reach for one instead of going "eh, I'll use a for loop".
In this case Git already had a string set, but it's still not standard so there's a good chance the original author just didn't know about it.
As noted in the comment, Git did have a sorted string list with bisection search, and that's from 2008 (it actually dates back to 2006 as the "path list" API, before it was renamed following the realisation that it was a generalised string list). Though as the hashmap proposal notes, it's a bit tricky: there's a single type with functions for sorted operations and functions for unsorted operations, so you need to know whether your list is sorted or not independently of its type.
I've seen this exact problem in C code many times in my life, especially in kernel space where data structures and memory allocations are fun.
Ironically, this is much _faster_ for small sets. Sometimes the error is intentional, because the programmer believes that all inputs will be small. IME, those programmers were wrong, but that's the inverse of survivorship bias: you only ever hear about the cases where the input grew.
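For what it's worth, the "naive" version really is hard to beat at small N: no allocation, no hashing, and a single cache-friendly pass. A hypothetical sketch of the linear scan in question:

```c
#include <string.h>

/* O(N) linear membership scan. For a handful of strings this beats a
 * hash set: no allocation, no hash computation, contiguous memory.
 * Used to *build* a set, though, it makes the whole loop O(N^2),
 * which is exactly the bug under discussion once N grows. */
static int has_string_linear(const char **list, size_t n, const char *needle)
{
	for (size_t i = 0; i < n; i++)
		if (!strcmp(list[i], needle))
			return 1;
	return 0;
}
```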
Even now, the contrast between repository sizes is wide. Most repos contain thousands of references, which, while not ideal for an O(N^2) algorithm, is still okay: a few thousand references means on the order of millions of comparisons. But as a Git forge, you also see a share of repositories with millions of references, where the same algorithm needs on the order of 10^12 comparisons.