Independent analyst Lyn Alden: Why is BTC the most suitable currency for AI payments?
Original Title: "Six AI Themes to Consider"
Author: Lyn Alden
Compiled by: GaryMa, Wu Says Blockchain
Note: The following excerpts are from Lyn Alden's August newsletter on AI themes. Some details in the Bitcoin-related sections may have been omitted for length or other reasons; we recommend readers refer to the original text for a more comprehensive understanding.
What is the Best Currency for AI?
What form of currency is best suited for AI agents? From what I can see, I believe Bitcoin is the primary answer.
In a future world, payments between humans and machines will occur extensively, and open-source, digital-native currencies will naturally apply.
AI agents can create a Bitcoin/cryptocurrency wallet, earn funds, and use those funds.
AI can start from scratch, be assigned a task, or simply "decide" to complete one. As it evaluates the best way to accomplish the task, it will determine that certain paid services are useful or essential. So it generates a Bitcoin/Lightning Network wallet (which, unlike a bank account, requires no permission because the software is open-source), and then persuades its creator to fund it or otherwise earns money online (perhaps it posts some clever content on Nostr and earns tips). Now, with funds, it can pay for what it needs.
Efficient open-source digital-native currencies can be used more quickly, without permission, and can facilitate smaller transactions (microtransactions) than existing fiat currencies, thereby creating more possibilities for machine-to-machine payments and human-to-machine payments. This is a natural form of currency for AI agents to use.
This trend is already underway. At the end of June, the Bitcoin companies Fedi and Stak organized an AI hackathon, calling on all developers, builders, and pioneers who believe in empowering people and communities to help democratize AI with Bitcoin, the Lightning Network, and Fedimint.
At the beginning of July, Lightning Labs, a major provider of Lightning Network infrastructure, announced a new set of developer tools to support this emerging Bitcoin + AI intersection.
Simply put, Lightning Labs' tools mainly have two aspects:
1) Making it easier to build AI agents that can hold and use Bitcoin to complete tasks;
2) Making it easier to pay for resource-intensive computations on a per-use basis, using the Lightning Network together with the long-standing "HTTP 402 Payment Required" status code.
To see the effect of the latter, you can watch this two-minute video shared by Lightning Labs CTO Olaoluwa Osuntokun.
Link: https://twitter.com/roasbeef/status/1677037020156399616
In the video, the user (Kody Low, developer and product support lead at Fedi) asks GPT-4 to write code for him, paying for each question with a Lightning payment (about $0.01 per question). Fittingly, the code he asked the AI to write implements exactly the service being demonstrated: paying for API calls over the Lightning Network.
Specifically, Kody asked the AI: "How do I create a reverse proxy to pay for my API calls using the Lightning Network? Use an LND backend and write the reverse proxy for me in golang." He then settled the Lightning payment request. The AI received the payment and responded with well-organized, commented code. Kody next asked it to rewrite the code in Rust instead of golang and paid again. The AI rewrote the code, explained in plain English what the key parts were doing, and noted that a fully implemented version would need more error handling and other nuances, but that this was the basic outline of the program. Kody thanked it, remarked that AI and the Lightning Network were a great match, and paid the fee once more. The AI replied that Kody was welcome, agreeing that AI and the Lightning Network are both powerful technologies that can create advanced solutions together.
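The request-pay-respond loop in the demo can be sketched in miniature. The sketch below is a toy simulation, not Lightning Labs' actual tooling: the server, wallet, and invoice objects are all illustrative stand-ins for a real LND node and a 402-style payment challenge.

```python
import hashlib
import secrets

# A minimal simulation of the "HTTP 402" pay-per-call flow: the server
# answers the first request with 402 plus an invoice, and only serves the
# answer once the client presents proof of payment (the invoice preimage).
# All names here are hypothetical; a real deployment would use an actual
# Lightning node (e.g. LND) behind a reverse proxy.

class ToyLightningServer:
    def __init__(self):
        self.invoices = {}  # payment_hash -> preimage

    def handle(self, question, preimage=None):
        if preimage and hashlib.sha256(preimage).hexdigest() in self.invoices:
            return 200, f"Answer to: {question}"
        # Issue an invoice; in reality this comes from the Lightning node.
        secret = secrets.token_bytes(32)
        payment_hash = hashlib.sha256(secret).hexdigest()
        self.invoices[payment_hash] = secret
        return 402, {"payment_hash": payment_hash, "amount_sats": 10,
                     "_secret": secret}  # _secret only exists in this toy model

class ToyWallet:
    def pay(self, invoice):
        # Paying a Lightning invoice reveals the preimage to the payer;
        # here the "network" is simulated by handing back the secret directly.
        return invoice["_secret"]

def ask(server, wallet, question):
    status, body = server.handle(question)
    if status == 402:
        preimage = wallet.pay(body)  # pay ~10 sats
        status, body = server.handle(question, preimage=preimage)
    return status, body
```

The key property the real system shares with this toy is that the preimage doubles as a receipt: knowing it proves the invoice was settled, with no account or identity involved.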
I personally tested one such program: I can use ChatGPT on my phone without a subscription, paying a few cents over the Lightning Network per question, with prices fluctuating dynamically based on a supply-demand algorithm running in the background.
For fun, I asked ChatGPT for some examples of payments an AI agent might need to make to humans. Here are its responses:
Great question! AI agents may need to pay humans while completing various tasks. Here are some potential scenarios:
1. Gig economy and microtasks: AI may be asked to find people to complete small tasks, such as tagging images or transcribing audio, which often occurs on gig platforms. Once the task is completed, the AI can send the agreed payment to the worker.
2. E-commerce transactions: If the AI manages an e-commerce platform or online marketplace, it may need to send payments to sellers or service providers. This could be in response to product sales, refunds, or service fees.
3. Content creators and online influencers: AI agents may manage platforms or channels where people post content or influence others. They can be programmed to distribute payments or rewards to these creators based on views, likes, shares, or other metrics.
4. Crowdsourcing and creative generation: Companies can use AI to manage innovation or creative generation activities, encouraging people to submit ideas. Once an idea is selected, the AI can send payments or rewards to the person who submitted the idea.
5. Customer refunds: If the AI handles customer service for a company, it may need to issue refunds for returned products, canceled services, or any inconvenience caused by the company.
6. Participation in surveys: AI may manage surveys for market research or other data collection purposes. Participants can be compensated for their time and input.
7. Charitable donations: AI can be used to manage payments from charities to beneficiaries.
8. Freelancer payments: AI may be hired to manage freelancers or contractors, paying them upon completion of work.
9. Bonuses and rewards: If the AI manages competitions or games, it may need to send bonuses or rewards to winners.
These are just a few examples of many potential applications. The common thread is that any task requiring compensation for providing services or value has the potential to be managed by AI.
Then I asked it for some examples of payments an AI agent might need to make to another AI agent while completing its assigned tasks, and its response was:
Indeed, there are many potential scenarios where an AI agent may need to transact with another AI agent. This could occur in systems designed for autonomous interaction and trading of resources or services, such as:
1. Autonomous vehicles: In a future of self-driving cars, the AI agents controlling various vehicles may pay other AI-controlled vehicles for right-of-way or to secure parking spaces.
2. AI marketplace: In an AI marketplace, one AI may pay another AI for specific service fees, such as data analysis, machine learning model training, or optimizing complex tasks.
3. Content or data acquisition: One AI may pay another AI for proprietary data or content, such as proprietary research, market analysis, or digital assets.
4. Cloud computing resources: AI managing servers or cloud resources may pay another AI for using its excess computing power or storage space.
5. Digital advertising: AI managing advertising campaigns may pay another AI for placing its ads in optimal positions or times on digital platforms.
6. AI-mediated auctions: In online advertising real-time bidding systems, one AI agent may pay another AI agent for ad space targeting specific audience segments.
7. Gaming: In online gaming scenarios, one AI may need to pay another AI for virtual goods, services, or game progress.
8. Internet of Things (IoT): In smart homes or smart cities, an IoT device (managed by AI) may pay another device to prioritize its tasks. For example, a smart refrigerator may pay a smart oven to delay its energy-intensive heating cycle until the refrigerator has completed its own energy-intensive cooling cycle.
These examples are speculative and depend on the complex infrastructure that allows safe AI-to-AI transactions. However, as technology continues to evolve, we may see more systems designed to allow for this kind of autonomous interaction.
Now, a valid skeptical question that readers of this article might ask at this point is, "Why use Lightning Network payments in AI applications instead of just paying in dollars?"
The answer is that Bitcoin/Lightning Network is uniquely suited to work with AI for several reasons:
● No chargebacks: Unlike most software, AI is very resource-intensive on a per-use basis. When you use AI, you are running very power-hungry GPU hardware. Once development is complete, the software itself is cheap, but each use incurs real electricity and hardware costs, which is different from most types of software. If an API is paid for through conventional means (like credit cards), the merchant becomes a target for fraud: a user could run thousands of queries against the GPUs and then charge back the payment, leaving the merchant with the electricity and hardware costs but no revenue. Lightning, by contrast, provides final settlement of a bearer asset: once a payment clears, it cannot be reversed.
● Microtransactions: Most payment methods have minimum transaction sizes. You cannot use a Visa card to pay $0.03, for example, and PayPal charges merchants a minimum fixed fee of $0.49. The closest workaround is to pre-fund an account and draw micro-payments from it with each use. This gets more complicated globally, across borders and currencies: if someone in Malaysia wants to pay someone in California for some AI usage, what payment method should they use? Even within the subscription payment backend on my own website, my operations manager and I sometimes encounter issues accepting foreign credit cards, since many countries rely on mobile payment services instead. The Lightning Network can handle microtransactions globally: users can pay a few cents per use, from anywhere to anywhere, without subscriptions or pre-funded accounts. This is highly relevant for machine-to-machine payments, where subscriptions may not be ideal; machines just need to interact with each other in real time.
● Permissionless and programmable: AI agents can be created to perform all sorts of operations. As entities without legal identity, AI agents find it easier to hold and use open-source programmable money than to use bank accounts or banking payment products. An AI agent may not be able to open a bank account, but it can run a Lightning node or operate a Lightning wallet. The programmable, real-time nature of the Lightning Network also allows dynamic pricing: the price of each API call can be programmed to change every second, depending on how many API requests the server is receiving at that moment.
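The dynamic-pricing idea in the last bullet can be sketched as a simple load-based price function. The base price, capacity, and cap below are illustrative assumptions, not parameters of any real service:

```python
# A sketch of dynamic per-call pricing: the price of each API call
# (in satoshis) rises with the number of requests currently in flight.

def price_per_call(concurrent_requests, base_sats=10, capacity=100,
                   max_sats=1000):
    """Price one API call in sats, scaling linearly with current load."""
    load = min(concurrent_requests / capacity, 1.0)  # clamp to 0.0 .. 1.0
    # Linear surge from the base price at idle to the max price at capacity.
    return round(base_sats + (max_sats - base_sats) * load)
```

Because Lightning invoices are generated per request, the server can quote a fresh price like this on every single call, something a monthly subscription model cannot express.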
Proof of Work to Prevent Spam
Today, as artificial intelligence lowers the cost of creating images, videos, text, programs, etc., it becomes increasingly difficult to distinguish which content is real and which is high-quality fake videos or virtual personas. When we see an image or video from a business, social, political, or geopolitical perspective, how can we be sure it is a genuine shot rather than a digitally fabricated product generated by AI? Now imagine how this will become in five or ten years as technology advances.
Even more concerning is how convincing bots may become. Whenever I use Twitter (now called "X"), my posts are inundated with bots. Fortunately, they are low-quality, poorly programmed, obvious bots. However, there are also human impersonators who manually create accounts similar to mine and try to phish people with scams. What if AI bots on social media were genuinely convincing? What if they could exist in countless numbers, with new ones created as fast as old ones are identified and blocked?
In addition to sending spam to people as a form of fraud, a sophisticated bot strategy might involve building a complete fake community to promote a certain ideology and make it appear popular. Many different AI personas might be able to operate online and persuade many people. Imagine someone writing a program that creates a bot army, each bot being unique and years more advanced than the current ChatGPT, coordinating to promote a political candidate, a social viewpoint, or simply to increase the audience of an influential person. Now imagine this program is open-source, available for anyone to use. Pandora's box has been opened.
Imagine interacting online where:
1) Most entities that appear human are not human;
2) It is difficult to determine which are real humans.
One way to address this situation is for social media platforms to enforce identity verification upon registration. In this case, newcomers would need to provide sufficient evidence that they are real people. Elon Musk's Twitter/X seems to be moving in this direction, as do some other platforms. While this may indeed work for certain platforms, the obvious drawback is that, in many cases, anonymity is an important component of safety and authenticity. Certain professionals cannot freely be themselves online, and so use anonymous accounts to relax socially. More seriously, in many countries certain types of speech are illegal: leaders cannot be openly criticized, certain topics are banned from discussion, and so on. In fact, some of America's founding fathers initially published their revolutionary critiques of the British monarchy under pseudonyms for protection. Authoritarians love mandatory identity verification as a condition of participating in social media, and a serious bot problem provides them with a great excuse to implement such measures; they can then easily force platforms to hand over the data.
Another approach is for social media to remain open and anonymous but to use some form of proof of persistence or proof of payment to stem the flood of bots. As early as the 1990s, cryptographer Adam Back invented Hashcash as a form of proof of work to prevent various types of spam and denial-of-service attacks. Satoshi Nakamoto ultimately used it at the core of the Bitcoin protocol, but it has never been widely deployed elsewhere on social media or the internet.
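Hashcash's core mechanism is easy to demonstrate: minting a stamp requires a brute-force search, while verifying one requires a single hash. A minimal sketch (using SHA-256 rather than the SHA-1 of the original scheme):

```python
import hashlib
from itertools import count

# A minimal Hashcash-style proof of work: find a nonce such that the
# SHA-256 hash of the message plus the nonce starts with a given number
# of zero bits. Producing the stamp costs CPU time (on average 2**bits
# hash attempts); verifying it costs a single hash.

def verify(message: str, nonce: int, bits: int = 12) -> bool:
    """Check a stamp (the cheap step)."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    value = int.from_bytes(digest, "big")
    return value >> (256 - bits) == 0  # top `bits` bits must all be zero

def mint(message: str, bits: int = 12) -> int:
    """Search for a nonce that makes the stamp valid (the costly step)."""
    for nonce in count():
        if verify(message, nonce, bits):
            return nonce
```

The asymmetry is the anti-spam property: sending one message is cheap, but sending millions forces the spammer to burn correspondingly more compute.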
The Bitcoin ledger has a historical record that is nearly impossible to forge. Each block added to the blockchain requires energy, so the ledger's complete history encodes a vast amount of expended energy, making it difficult to forge. Beyond its use as a monetary asset, the Bitcoin blockchain now exists as a global, open-source, immutable time capsule: arbitrary data (documents, images, books, songs, or timestamps from other systems) can be inserted and later proven not to have been altered since that date. This functionality cannot be replicated by closed-source, opaque, centralized servers.
Can Bitcoin's proof of work be used to protect other kinds of systems from spam, just as Adam Back originally intended with Hashcash? Some projects have already explored these directions, which may become increasingly important in the coming years.
● Payments: Stacker News currently uses Bitcoin both as a posting incentive and as spam prevention. Users must pay a small amount of bitcoin to post, and they can tip each other in small amounts as well. The problem with traditional payment methods (like Visa) is that they identify people, whereas Bitcoin provides proof of payment without requiring identity, which is all that spam prevention needs. This can be tuned in various ways: if Twitter/X implemented a similar feature, for example, I could allow only accounts that have incurred some small real-world cost to comment on my posts.
● Identity with cost: Someone could create a public/private key pair (as on Nostr), and then embed the public key into the Bitcoin blockchain, incurring a real-world resource cost via transaction fees, verify in various ways that the key indeed belongs to them, and then sign any digital content they create or authorize with the matching private key. The same would apply to any large news organization with known integrity standards. Platforms could check which key pairs are, or are not, embedded. From that point, people can distinguish forgeries from real content ("real" meaning created by that individual or organization and unmodified since publication, not necessarily objectively true), and millions of impersonators cannot use similar key pairs, because the genuine ones were already embedded in the Bitcoin blockchain at real-world cost.
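The pay-to-post mechanism described in the first bullet above can be sketched as a simple gate. The class and receipt handling below are illustrative assumptions, not how Stacker News is actually implemented:

```python
# An illustrative pay-to-post gate: a post is only accepted once a small
# payment has been confirmed. Identity is never checked; the anti-spam
# signal is purely the proof of payment.

class PayToPostBoard:
    POST_COST_SATS = 100  # hypothetical posting fee

    def __init__(self):
        self.posts = []
        self.paid_receipts = set()

    def record_payment(self, receipt_id: str):
        # In reality this would be triggered by a settled Lightning invoice.
        self.paid_receipts.add(receipt_id)

    def submit(self, receipt_id: str, text: str) -> bool:
        if receipt_id not in self.paid_receipts:
            return False  # no payment, no post
        self.paid_receipts.remove(receipt_id)  # each receipt is single-use
        self.posts.append(text)
        return True
```

Note that the gate stores no identity at all: a spammer is throttled not by being recognized, but by having to pay the posting cost again for every single message.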
While we are not there yet, in a world where AI can imitate or forge at almost no cost, we may learn to require proof of persistence (cryptographic public/private key pairs) and proof of work (evidence that real-world resources were spent to give weight to a given key pair or online identity) before taking certain types of digital content seriously.
In the potential future world I described earlier:
1) Most entities that appear human are not human;
2) It is difficult to determine which are real humans, and we need various methods to classify which entities carry some tangible reality or associated cost, finding signal in an ocean of nearly infinite noise.
Identity verification and actual resource costs are the two main directions platforms can take.