The union–find decoder is a leading algorithmic approach to the correction of quantum errors on the surface code, achieving code thresholds comparable to minimum-weight perfect matching (MWPM) with amortised computational time scaling near-linearly in the number of physical qubits. This complexity is achieved via the optimisations provided by the disjoint-set data structure, such as union by rank and path compression. We demonstrate, however, that the behaviour of the decoder at scale underutilises this data structure, for reasons that are both analytic and algorithmic, and that architectural designs can accordingly be simplified and improved to reduce resource overhead in practice. To reinforce this, we model the behaviour of the erasure clusters formed by the decoder and show that no percolation threshold exists within the data structure for any mode of operation. This yields a linear worst-case time complexity for the decoder at scale, even for a naive implementation that omits these popular optimisations.
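For reference, the following is a minimal sketch of the disjoint-set (union–find) structure discussed above, with the two standard optimisations, union by rank and path compression; the class and method names are illustrative and not drawn from any particular decoder implementation. In a union–find decoder, the elements would correspond to syndrome-graph vertices and each set to an erasure cluster.

```python
class DisjointSet:
    """Disjoint-set forest over n elements (e.g. syndrome-graph vertices)."""

    def __init__(self, n: int):
        self.parent = list(range(n))  # every element starts as its own root
        self.rank = [0] * n           # rank: an upper bound on tree height

    def find(self, x: int) -> int:
        """Return the root of x's set, compressing the path as we go."""
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a: int, b: int) -> bool:
        """Merge the sets containing a and b; return False if already merged."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:  # union by rank: attach the
            ra, rb = rb, ra                # shallower tree under the deeper
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True


# Hypothetical usage: merging clusters as syndrome vertices are joined.
ds = DisjointSet(8)
ds.union(0, 1)
ds.union(1, 2)
assert ds.find(0) == ds.find(2)  # vertices 0, 1 and 2 now form one cluster
```

These two optimisations exist to keep the trees in the forest shallow; the abstract's argument is that, because the decoder's erasure clusters never percolate, such deep trees do not arise at scale, so even a naive implementation without them retains linear worst-case time.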