Trust-Based vs. Evidence-Based Geolocation

Follow-up to the NANOG thread “What’s up with BGP communities?” (lists.nanog.org)

IP geolocation has historically been trust-based. Operators publish geofeeds declaring where their IP ranges are located, and IP geolocation providers aggregate and sell that data.
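For readers unfamiliar with the mechanism: a geofeed is a self-published CSV file in the RFC 8805 format, where each line declares a prefix, an ISO country code, an ISO 3166-2 region, a city, and an optional (now discouraged) postal code. A representative fragment, using documentation address ranges:

```
# ip_prefix,country,region,city,postal_code  (RFC 8805)
203.0.113.0/24,AU,AU-NSW,Sydney,
198.51.100.0/24,US,US-VA,Richmond,
```

Nothing in the format itself proves the declarations are true; it is purely a statement by the publisher.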

The problem: there’s no verification layer. Geofeeds can be stale, incomplete, or intentionally false. When a customer asks “why is this IP showing as located in Germany,” a trust-based provider can only say “because someone said so.”

This came up recently in a NANOG thread where operators debated geofeed reliability. Christopher Hawker argued:

“If I tell you (via my Geofeed) my address space is being used in Sydney Australia, you should be presenting it as being used in Sydney Australia. Not what you think is accurate or correct.”

It’s a fair point from an operator’s perspective. They know their network. But Ryan Hamel raised the counterpoint: “When it comes to VPN providers or those who wish to cause trouble on the Internet, how does a IP geolocation provider prevent bogus data from getting accepted?”

Ben Cartwright-Cox, who runs bgp.tools, put it more bluntly: “A very much non-zero amount of geofeeds from providers opportunistically putting their prefixes in countries they are not, mostly for VPN placement… but there is also a number of networks that are using geofeeds as an opportunistic way to get their prefixes into countries where copyright holders don’t necessarily have a way of sending notices to.”

This is the core tension. Operators want their declarations trusted as authoritative. But at scale, with 70,000+ ASNs, diligent geofeed maintenance is the exception rather than the norm. Large telecoms often don’t publish geofeeds at all. And some actors actively lie.

Evidence-based geolocation flips this model.

Instead of accepting declarations at face value, you measure. RTT data, traceroutes, network topology analysis—each provides verifiable evidence of where traffic actually originates.
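RTT evidence works because a round-trip time puts a hard physical ceiling on distance: light in fiber propagates at roughly two-thirds of c, so a host cannot be farther from a probe than the measured RTT allows. A minimal sketch of that feasibility check (the probe coordinates, fiber factor, and function names are illustrative, not IPinfo’s actual pipeline):

```python
import math

C_KM_PER_MS = 299.792458   # speed of light in vacuum, km per millisecond
FIBER_FACTOR = 2 / 3       # signal in fiber travels at roughly 2/3 c

def max_distance_km(rtt_ms: float) -> float:
    """Hard physical ceiling on probe-to-host distance implied by an RTT."""
    one_way_ms = rtt_ms / 2
    return one_way_ms * C_KM_PER_MS * FIBER_FACTOR

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def location_is_feasible(rtt_ms, probe_lat, probe_lon, claim_lat, claim_lon) -> bool:
    """Can the claimed location be reconciled with the measured RTT at all?"""
    return haversine_km(probe_lat, probe_lon, claim_lat, claim_lon) <= max_distance_km(rtt_ms)

# A 5 ms RTT from a Frankfurt probe (50.11 N, 8.68 E) caps the host at ~500 km,
# so a geofeed claiming Lisbon (38.72 N, -9.14 W, ~1900 km away) is physically
# impossible, while Munich (48.14 N, 11.58 E, ~300 km) remains plausible.
print(location_is_feasible(5.0, 50.11, 8.68, 38.72, -9.14))
print(location_is_feasible(5.0, 50.11, 8.68, 48.14, 11.58))
```

A single probe only rules locations out; combining many probes narrows the feasible region down to a city.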

At IPinfo, we built a network of 1,300+ probe servers across 550+ cities in 152 countries specifically for this purpose. When we say an IP is in Richmond, VA, we can show the measurement data that supports it. If a customer challenges our data, we can point to evidence rather than saying “an operator told us so.”

This doesn’t mean geofeeds are useless. They’re valuable—especially from operators who maintain them diligently. But they serve as one input among many, weighted by how well they align with observable evidence.

Gary Sparkes raised a valid scenario: “Let’s say your measuring endpoint is in Baltimore. Let’s say I’m announcing out of Raleigh for whatever reason. Let’s say the end users of that /24 are all in Baltimore, and that range is only used for Baltimore people. Are you going to pin me in NC (incorrect) or Baltimore (correct, as my geofeed publishes)?”

This is exactly why our system has fallback mechanisms. When active measurement data is noisy or inconclusive—like when network architecture creates misleading RTT patterns—we use geofeed data. The hierarchy isn’t about dismissing operator input. It’s about prioritizing verifiable evidence when it’s available and falling back to declared data when it’s not.

Job Snijders noted another problem with the geofeed ecosystem: the RPKI-based authentication scheme designed to verify geofeeds has failed to gain adoption. “I’ve been unable to find any other people willing to implement & support the scheme… So, as it stands, Geofeed information generally is published & consumed with weak controls on semantic correctness, integrity & authenticity.”

Without authentication, geofeeds remain a trust-based system in a world where not everyone can be trusted.

The distinction matters because different use cases have different tolerance for error.

A CDN optimizing for latency can afford some inaccuracy. A streaming service enforcing content licensing cannot. Cybersecurity teams operating on “zero trust” principles need data they can verify, not data they have to take on faith.

Warren Kumari summarized how we got here: “IP geolocation’s original goals were to get users to the ‘closest’ datacenter to minimize latency… It has been expanded (co-opted?!) to also get people to a close pizza parlor, and now also is being used (abused?) to implement content restrictions. These were not part of the original design, and so it’s not surprising if they don’t work well for that.”

He’s right. IP geolocation is being used for purposes it wasn’t designed for. That’s precisely why the methodology has to evolve. Trust-based systems that worked for latency optimization don’t hold up when the stakes include content licensing, fraud prevention, and regulatory compliance.

Trust-based systems work when everyone participates honestly. Evidence-based systems work regardless.

We’re not asking operators to stop publishing geofeeds. We’re asking them to understand why verification matters—and to work with us when our measurements and their declarations don’t align. The goal isn’t to override operators. It’s to build a system where accuracy can be demonstrated, not just asserted.


Want guaranteed accuracy for your prefixes? Help us improve IP geolocation accuracy and host a ProbeNet server: Host an IPinfo Probe Server