GoDaddy just announced a “Trusted Identity Naming System for AI Agents.”
At first glance, GoDaddy's blog post sounds promising. The pitch for an open system is catchy: "New agnostic framework allows anyone to easily find, verify and trust AI agents." A way to give artificial intelligences unique names, "build confidence," and let humans know which agents to trust.
But it may quietly reintroduce the oldest form of digital control: deciding who gets to exist online. In practice, it reads like the oldest trick on the internet — turning trust into a service.
A Familiar Pattern
Every decade or so, someone rediscovers that there’s money in “managing trust.” In the 2000s it was Extended Validation certificates. Then came the blue-tick era of “verified” users. Now it’s the AI agent namespace — a new market for digital legitimacy.
GoDaddy isn’t proposing a decentralized identity system; it’s proposing a central ledger of permission. No standards body, no RFC, no hint of open governance. Just a corporate database that decides which AI gets to be called “trusted.”
If you can name it, you can price it. If you can price it, you can control it.
A few weeks ago, I wrote about Cloudflare’s Annual Founders’ Letter, where the company proposed a very different future: one where content and creators earn credibility through transparency and attribution, not certification.
Cloudflare argued that the web’s infrastructure should remain neutral — that the problem isn’t who’s allowed to speak, but how we measure and reward honest contribution. Cloudflare wants an open web of provenance; GoDaddy seems to prefer a registered one.
GoDaddy seems to have missed that memo. Its new proposal feels less like a protocol and more like a registry — a cosmetic rebranding of the same old authority model.
Two Philosophies
| Aspect | Cloudflare | GoDaddy |
|---|---|---|
| Trust basis | Provenance and behavior | Authority and registration |
| Governance | Open ecosystem | Proprietary namespace |
| Incentive model | Merit-based recognition | Pay-to-participate legitimacy |
| Risk | Fragmented signals | Centralized gatekeeping |
When “Trusted” Means “Approved”
If systems like this gain traction, the web will quietly fracture again. AI outputs from “unregistered” agents will be filtered, demoted, or simply ignored. Platforms will claim it’s about safety, regulators will nod approvingly, and a few large registrars will quietly own the authentication layer of machine communication.
As “AI ingestion” replaces search engine crawling, creators will fight to catch the eyes of AI just as they once fought to rank first on Google — only this time, it will come at a price.
An Open Alternative
It doesn’t have to go that way. An AI’s identity could be verifiable through open mechanisms: decentralized identifiers (DIDs), DNSSEC, cryptographic provenance. Anyone could issue or verify trust claims, and the system would evolve through use, not decree.
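To make the idea concrete, here is a minimal sketch of a decentralized trust claim. It is illustrative only: the `did:sketch` identifier mimics the did:key pattern (the identifier is derived from key material, so no registrar is needed to mint it), and an HMAC stands in for the asymmetric signature a real system would use (Ed25519 keys under the W3C DID spec). All names here are hypothetical.

```python
# Sketch: decentralized agent identity and trust claims.
# Assumptions: HMAC stands in for a real asymmetric signature;
# "did:sketch" is an illustrative identifier scheme, not a registered DID method.
import hashlib
import hmac
import json

def agent_id(public_key_bytes: bytes) -> str:
    # did:key-style identifier: derived from the agent's own key material,
    # so anyone can mint one without asking a central authority.
    return "did:sketch:" + hashlib.sha256(public_key_bytes).hexdigest()[:16]

def issue_claim(issuer_key: bytes, subject: str, statement: str) -> dict:
    # Any party holding a key can issue a claim about any agent.
    claim = {"subject": subject, "statement": statement}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return claim

def verify_claim(issuer_key: bytes, claim: dict) -> bool:
    # Any party can verify the claim independently; no registry lookup involved.
    body = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])

issuer = b"issuer-secret-key"
aid = agent_id(b"agent-public-key-bytes")
claim = issue_claim(issuer, aid, "agent behaved per stated policy")
print(verify_claim(issuer, claim))    # True: signature checks out
print(verify_claim(b"forger", claim)) # False: wrong issuer key
```

The point of the sketch is structural: identity and trust claims are things anyone can create and anyone can check, so legitimacy accrues from verifiable behavior rather than from a paid entry in someone's database.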
That’s how the internet used to work — before trust became another product line.
GoDaddy wants to name the machines. Cloudflare wants to prove who they are.
Both say they’re protecting the web. Only one still remembers what it’s made of.
The question isn’t whether AI will have names — it’s who gets to write the phonebook.