Would like to see this benchmarked on different system configurations. That being said, it's pretty embarrassing for the devs who coded this initially.
I imagine the original dev just wrote a function that reads the file and checks for duplicates, expecting it to be run once. At the time, the file probably had something like 100 entries.
Then someone else probably added a convenient wrapper on top of it to fetch a single value. Then someone else needed to go over all the values and reached for that function, treating it as a black box.
That all went unnoticed because there were only 100 values in the file. Even at 1,000 it was hardly an issue, taking a fraction of a second. Years later it's 63,000, and that bit of code is a major bottleneck.
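A minimal sketch of how that kind of accidental quadratic behavior creeps in (the file format and function names here are purely hypothetical, not the actual code in question):

```python
import json

def get_entry(path, key):
    """Original helper: re-reads and re-scans the entire file on every call.
    Perfectly fine when it's only ever called once over ~100 entries."""
    with open(path) as f:
        entries = json.load(f)  # assume a JSON list of {"key": ..., "value": ...} objects
    seen = set()
    for entry in entries:
        if entry["key"] in seen:  # the duplicate check the original author cared about
            continue
        seen.add(entry["key"])
        if entry["key"] == key:
            return entry["value"]
    return None

def load_everything(path, keys):
    """Later addition: calls the black-box helper once per key, so the file is
    read and scanned len(keys) times over -- accidentally quadratic."""
    return {key: get_entry(path, key) for key in keys}
```

Each call is cheap on its own; it's the loop over every key, with each call re-reading and re-scanning the file, that turns a linear job into a quadratic one once the file grows.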
The problem isn't directly with either the function itself or its use, but with the combination of the two. Programmers routinely call functions without looking inside them, and people often write functions inefficiently when they don't expect them to ever be a bottleneck. That kind of thing is normally only found later, when profiling for performance.
That, to me, is the embarrassing part: that nobody has seen fit to look into the long loading times and fix them, not that someone wrote suboptimal code to begin with.