
Robert Sams on the future of crypto (from my email)

In response to this last crypto post of mine:

I’m glad you’ve put this one out there, as it’s a thesis I’ve been thinking about for many years and do not think is exotic at all. For all the hand-wavy hypothesising about the future of autonomous AI agents, precious little attention is given to the role of legal personhood in the discussion. The very concept of an “autonomous agent” is ambiguous in this regard. In one sense, it means an autonomously operated system acting _on behalf_ of someone or something else; in another sense, it means something that acts on its own behalf: it “has agency”. The distinction is critical, because it’s hard to see how an AI system can have agency if it cannot, on its own, own property, enter into contracts, and sue and be sued. Having agency is more than being intelligent.

It’s pretty hard to imagine a scenario where jurisdictions start granting legal personhood to AI systems. There may be legal entities whose human directors delegate corporate decision-making to an AI, but there is always an essential human fiduciary in the loop who has legal personhood and is the nexus of AI regulation. But blockchains upend this framework, offering an alternative infrastructure with a different model of ownership, contract, and dispute resolution in which a human fiduciary is not an essential requirement. AIs can be first-class citizens in the crypto economy.

So having agency is more than being intelligent: you need “economic personhood” to autonomously interact in the real world, and blockchains provide the infrastructure for non-human economic persons. If an AI can buy its own GPU compute and other resources, and fund its opex by selling services people (or other AIs) value, the AI has economic personhood. Crucially, these non-human economic persons do not need _general_ intelligence; they just need domain-specific capabilities that enable them to produce valuable output and continuously adapt to a competitive marketplace.
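As a toy illustration of the viability condition implied here (all names and figures are hypothetical, not anything from the original email): an agent holds economic personhood only while its treasury covers the opex it must pay for itself, with no human balance sheet to fall back on.

```python
# Toy model (hypothetical figures): an agent persists as an "economic person"
# only while its treasury can cover the compute it buys for itself.

def run_agent(treasury: float, gpu_cost_per_day: float,
              revenue_per_day: float, days: int) -> tuple[float, bool]:
    """Simulate `days` of operation; return (final treasury, survived)."""
    for _ in range(days):
        treasury += revenue_per_day    # income from services it sells
        treasury -= gpu_cost_per_day   # opex: GPU compute it buys itself
        if treasury < 0:               # insolvent: no human bailout exists
            return treasury, False
    return treasury, True

# An agent whose services more than cover its compute bill persists:
print(run_agent(treasury=100.0, gpu_cost_per_day=10.0,
                revenue_per_day=12.0, days=30))   # (160.0, True)
```

The point of the sketch is only that survival is a market test, not a property of the agent’s intelligence: the same loop with `revenue_per_day=5.0` goes insolvent within a month.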
That is why the idea is not very exotic at all, as the current capabilities of LLMs and blockchains are arguably sufficient for this scenario to materialise in the near term. The obstacles seem to be the more tractable problems, like: “how can the AI agent learn to trust the veracity of the data it solicits and the quality of the services (especially GPU compute) it buys?”. Whilst it sounds kind of funny, there’s an opportunity for human-operated service providers to build brands of trustworthiness with AI agents by doing things that are easy for context-aware humans but hard for AIs, like attesting to the veracity of a data feed (“is it really 41°C in lower Manhattan today?”, “did USDJPY really rally 10% on the day?”). AI-human trust games may turn out to be more effective than centralised human feedback loops operated by big AI tech, especially if the AIs are domain-specific and must strive for product-market fit to survive.

And whilst AGI doomers will be predictably horrified by the prospect of AIs with economic personhood, my own contrary view is that our entire orientation to the subject will change once we see just how vulnerable to attack these AIs are when you cut the umbilical cord they currently have by being ensconced inside trusted environments funded by big tech’s enormous balance sheets. Finally, I suspect that an economic-personhood orientation to the AI x-risk debate will improve the research and dialogue significantly. My own speculation is that we’ll eventually come to the conclusion that the telos of AGI is not a singularity but a plurality of competing A[G]Is. It seems more fruitful to ponder the respective comparative advantages of AIs vs humans in the domains of computational power and context awareness, and to explore the codependencies when these two classes of intelligent economic agent must compete and cooperate in a decentralised market.

The post Robert Sams on the future of crypto (from my email) appeared first on Marginal REVOLUTION.
