That is how it previously looked from my vantage point in Germany, but it isn’t the case: like Quad9, and unlike Google DNS, 1.1.1.1 does not support EDNS Client Subnet (ECS), for privacy reasons.
From my vantage point in Singapore, I had 1.1.1.1 return pool server addresses belonging to the Singapore and Hong Kong zones in response to queries for the global IPv4 zone. That matches where the backend IPv4 addresses of the 1.1.1.1 anycast instances reachable from my vantage point appear to be located according to a recent GeoLite database version:
11 HK, Hong Kong
22 SG, Singapore
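For what it’s worth, the tallying I did can be sketched roughly like this; it is only a minimal sketch, not my exact script (the 100-query loop is arbitrary, and consecutive queries within the TTL will mostly return cached answers, so spreading the queries out over time is what actually surfaces the different backend locations):

```python
import collections
import dns.resolver  # dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["1.1.1.1"]  # resolver under test

counts = collections.Counter()
for _ in range(100):  # arbitrary number of samples
    # Global IPv4 zone of the pool
    answer = resolver.resolve("pool.ntp.org", "A")
    for rr in answer:
        counts[rr.address] += 1

# Tally of how often each pool server address was returned
for address, seen in counts.most_common():
    print(seen, address)
```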
Quad9 additionally returned pool servers belonging to the Japan zone, which matches the GeoLite-based apparent country locations of Quad9 backend server IPv4 addresses:
2 DE, Germany
2 HK, Hong Kong
3 JP, Japan
8 SG, Singapore
I currently believe that the two backend server IP addresses showing up as located in Germany are an artefact of my test setup.
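The per-country counts above come from mapping the observed backend addresses through the GeoLite country database; a minimal sketch of that mapping, assuming the geoip2 Python library and a locally downloaded GeoLite2-Country.mmdb file (the addresses below are placeholders, not the ones actually observed):

```python
import collections
import geoip2.database  # pip install geoip2
import geoip2.errors

# Placeholder addresses; replace with the backend IPs actually observed.
backend_ips = ["203.0.113.10", "203.0.113.20"]

counts = collections.Counter()
with geoip2.database.Reader("GeoLite2-Country.mmdb") as reader:
    for ip in backend_ips:
        try:
            country = reader.country(ip).country
        except geoip2.errors.AddressNotFoundError:
            counts["??, unknown"] += 1
            continue
        counts[f"{country.iso_code}, {country.name}"] += 1

# Same "count  CC, Country" layout as the listings above
for label, seen in sorted(counts.items()):
    print(seen, label)
```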
I agree as well. At the same time, I also agree that there probably isn’t much the pool can currently do about that.
One thing that has been tried, and found not to bring the desired effect, was reducing the DNS TTL, in the hope that bursts lasting longer than the TTL would at least be spread across more servers.
Returning more IP addresses would indeed be another potential lever. But I also concur that, just as with the TTL values, there are tradeoffs that would likely spark, and also require, a complex discussion.
And since the overall system is so complex, with many parts outside the pool’s control, it would be hard to predict the result of any changes, or, when simply trying them out, to properly assess the effect they actually have.
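Just to make the “spread bursts across more servers” intuition concrete, here is a back-of-envelope sketch; all numbers are made up, and it optimistically assumes each re-resolution returns a fresh, non-overlapping set of addresses, which is exactly the kind of simplification that makes real-world effects hard to predict:

```python
import math

def distinct_addresses_upper_bound(burst_s, ttl_s, addrs_per_response):
    """Upper bound on distinct pool addresses a bursting client can see,
    assuming one re-resolution per TTL and non-overlapping responses."""
    return math.ceil(burst_s / ttl_s) * addrs_per_response

# Made-up example: a 10-minute burst, two TTL choices, two response sizes.
for ttl in (150, 30):
    for addrs in (4, 8):
        print(f"TTL={ttl}s, {addrs} addrs/response: "
              f"<= {distinct_addresses_upper_bound(600, ttl, addrs)} distinct servers")
```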
It looks at a different aspect from what the thread was originally intended for. In my view, there are two (often interrelated/interdependent) sides: the experience of clients using the pool, and the experience of server operators providing their resources to it. (A third view would be that of the pool infrastructure.)
The thread was originally started with a focus on the latter (server operator experience). The topic you brought up focuses more on the former (“Some clients might get a reply, but for a number of them rate limiting might kick in.”, i.e., the clients’ experience), but it is also relevant to server operators (“rate limiting” being an attempt by servers to reduce the load they experience, regardless of whether it actually works, is even needed, or may in fact be counter-productive).