What is the prospect of blockchain technology in the era of strong artificial intelligence?

Strong artificial intelligence is the brightest of the efficiency-oriented technologies, while blockchain is the culmination of the fairness-oriented ones.

Written by: Meng Yan

Recently, many people have been asking me: with ChatGPT reigniting interest in AI, have blockchain and Web3 had their spotlight stolen? Is there still hope for their future? Some friends who know me well have also asked whether I regret choosing blockchain over AI back in the day.

Here’s a bit of background. After leaving IBM in early 2017, I discussed my next career direction with Jiang Tao, the founder of CSDN. There were two options: one was AI, the other blockchain. At that time I had already been researching blockchain for two years, so I naturally leaned that way. However, Jiang Tao firmly believed that AI had stronger momentum and was more disruptive. After careful consideration I agreed, so from early to mid-2017 I spent half a year working in AI tech media, attended many conferences, interviewed many people, and skimmed the surface of machine learning. But by August I had returned to blockchain, and I have stayed on that path to this day. So for me personally, there really was a deliberate historical choice: giving up AI for blockchain.

Personally, I certainly do not regret it. When choosing a direction, one must first consider one's own situation. My circumstances would only have allowed me to be a cheerleader in AI; quite apart from the low pay, I would have been performing half-heartedly and without conviction, and would have been looked down upon. Blockchain, on the other hand, is my home ground: not only do I get to take the stage, but much of my earlier accumulation can be put to use. Moreover, after getting to know the Chinese AI circle of that time, I was not optimistic. I knew only a little about the technology, but I did have common sense. People said the blockchain circle was restless, but China's AI circle back then was hardly less so: before achieving any decisive breakthrough, AI in China had prematurely turned into a business of trading connections for profit. If, as Lu Xun wrote of Ueno's cherry blossoms, things were no different on the other side, better to pursue blockchain, where I held a comparative advantage. That attitude has not changed to this day. Had I stayed in AI, the modest achievements I have made in blockchain over the past few years would obviously have been out of the question, I would have gained nothing meaningful in AI either, and I might well have sunk into a deep sense of loss.

However, the above concerns only personal choices. Raised to the level of the industry, a different scale of analysis is required. Since strong AI has undeniably arrived, whether and how the blockchain industry needs to reposition itself is a question that deserves serious consideration. Strong AI will impact every industry, and its long-term effects are unpredictable, so I suspect many industry insiders are currently panicking and contemplating the future of their fields. To borrow Lu Xun's phrase: some industries may, for now, have temporarily secured their place as slaves in the era of strong AI, while others, such as translation, illustration, official document drafting, simple programming, and data analysis, are already trembling, longing to be slaves but unable to become one.

So what about the blockchain industry? I see that not many people are discussing this issue right now, so I will share my views.

To start with the conclusion: I believe blockchain is fundamentally opposed to strong AI in its value orientation, and precisely because of this, the two form a complementary relationship. Simply put, the essential characteristic of strong AI is that its internal mechanisms are incomprehensible to humans. Attempting to achieve safety by actively intervening in those internal mechanisms is therefore like climbing a tree to catch fish: futile. Humans need to legislate for strong AI using blockchain, establish contracts with it, and impose external constraints on it. This is humanity's only chance of peaceful coexistence with strong AI. In the future, a contradictory yet interdependent relationship will form between the two: strong AI raises efficiency, while blockchain maintains fairness; strong AI develops the productive forces, while blockchain shapes the relations of production; strong AI pushes up the ceiling, while blockchain guards the floor; strong AI creates advanced tools and weapons, while blockchain establishes unbreakable contracts between them and humans. In short, strong AI is the unbridled horse, and blockchain is the rein. Blockchain will therefore not only survive in the era of strong AI but will develop rapidly as its contradictory companion industry, growing alongside it. It is even easy to imagine that after strong AI has taken over most human cognitive work, one of the few tasks humans will still have to perform themselves is writing and reviewing blockchain smart contracts, for these are the contracts between humans and strong AI, and they cannot be entrusted to the counterparty.

Now, let’s elaborate.

1. GPT is Strong AI

I am very cautious when using the terms "AI" and "strong AI," because the AI we commonly refer to does not specifically denote strong AI (artificial general intelligence, AGI), but rather includes weaker or specialized forms of AI. Strong AI is the topic worth discussing; weak AI is not. The direction or industry of AI has existed for a long time, but it is only after the emergence of strong AI that it becomes necessary to discuss the relationship between blockchain and strong AI.

I won’t elaborate much on what strong AI is; many others have introduced it already. In short, it is what you have seen in sci-fi movies and horror novels: the so-called holy grail of artificial intelligence, the thing that launches a nuclear attack on humanity in "The Terminator" and farms humans as batteries in "The Matrix." I just want to make one judgment: GPT is strong AI. It is still in its infancy, but as long as development continues down this path, strong AI proper will arrive before GPT reaches version 8.

Even the creators of GPT are no longer pretending; they have laid it all out. On March 22, 2023, Microsoft Research published a 154-page paper titled "Sparks of Artificial General Intelligence: Early experiments with GPT-4." The paper is long and I have not read all of it, but the crucial takeaway is one sentence from the abstract: "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."

Figure 1. Microsoft Research's paper considers GPT-4 an early version of AGI, i.e., strong AI

Once AI development reaches this stage, the exploratory phase is over. It took the AI field nearly seventy years to get here. For the first fifty years the direction was uncertain, with five major schools competing against one another. It was not until 2006, when Geoffrey Hinton made his breakthrough in deep learning, that the direction was largely settled, with connectionism prevailing. After that, the focus shifted to finding a pathway to strong AI within the deep learning framework. This exploratory phase is characterized by a high degree of unpredictability; success is somewhat like a lottery, and even top experts, including the eventual winners themselves, find it difficult to tell which path is correct before the breakthrough happens. For example, the AI expert Li Mu runs a YouTube channel where he tracks the latest developments in AI through close readings of papers. Well before the explosion of ChatGPT, he had been extensively tracking and introducing work on Transformer, GPT, BERT, and other important frontier topics. Even so, on the eve of ChatGPT's launch he could not tell how successful this path would be, remarking that if ChatGPT could attract hundreds or even thousands of users, that would already be impressive. Even a top expert like him could not be certain, until the very last moment, which door hid the holy grail.

However, technological innovation often unfolds this way: after sailing through turbulent seas for a long time without breakthroughs, once the correct path to a new continent is found, an explosion of progress occurs in a short time. The path to strong AI has been identified, and we are entering an explosive period. This explosion is beyond what "exponential speed" can describe. In a short time, we will see a plethora of applications that previously only existed in sci-fi movies. In essence, this infant strong AI will soon grow into an unprecedentedly vast intelligence.

2. Strong AI is Essentially Unsafe

After ChatGPT was released, many social media influencers praised its power while continually reassuring their audience that strong AI is humanity's good friend, that it is safe, that scenarios like "The Terminator" or "The Matrix" will not occur, that AI will only create more opportunities and help humanity live better, and so on. I do not agree. Professionals should tell the truth and inform the public of the basic facts. Capability and safety are inherently in tension. Strong AI is undoubtedly powerful, but to claim that it is inherently safe is pure self-deception. Strong AI is essentially unsafe.

Is this statement too absolute? Not at all.

We must first clarify that no matter how powerful AI is, it is essentially a function implemented in software: y = f(x). You feed in your question as x, in the form of text, voice, images, or other formats, and the AI returns an output y. That ChatGPT can respond fluently with y to such a wide variety of x implies that the function f must be very complex.
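To make the y = f(x) framing concrete, here is a minimal toy sketch in Python. The two-layer network below is a stand-in for an LLM, with shapes chosen arbitrarily for illustration; a real model is this same structure scaled up by many orders of magnitude.

```python
# A toy illustration of the "AI as a function" view: y = f(x).
import numpy as np

rng = np.random.default_rng(0)

# The "parameters" of f: two weight matrices, randomly initialized here.
W1 = rng.normal(size=(16, 8))   # first layer weights
W2 = rng.normal(size=(8, 4))    # second layer weights

def f(x: np.ndarray) -> np.ndarray:
    """The entire model is just a parameterized mapping from input to output."""
    hidden = np.tanh(x @ W1)    # nonlinearity between layers
    return hidden @ W2          # the output y

x = rng.normal(size=(1, 16))    # an encoded input (text, audio, image, ...)
y = f(x)                        # the model's response
print(y.shape)                  # (1, 4)
```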

How complex? Everyone now knows that GPT is a large language model (LLM). "Large" refers to the enormous number of parameters in this function f. How many? GPT-3.5 is reported to have about 175 billion parameters; OpenAI has not disclosed GPT-4's count, but it is widely believed to be substantially larger, and future versions will likely grow further still. This is the direct reason we call GPT a large model.
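A back-of-envelope calculation shows what "large" means in practice. The figures below are illustrative assumptions (the commonly reported 175-billion-parameter count, stored as 16-bit weights), not official numbers:

```python
# Rough memory footprint of the model weights alone (ignoring activations,
# optimizer state, and serving overhead).
params = 175e9            # assumed parameter count, per public reports
bytes_per_param = 2       # assuming fp16 storage

weight_bytes = params * bytes_per_param
print(f"{weight_bytes / 1e9:.0f} GB just to hold the weights")  # ~350 GB
```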

GPT has so many parameters not merely for the sake of being large; there are solid reasons behind it. Before GPT, and alongside it, the vast majority of AI models were designed and trained from the outset to solve specific problems: models built specifically for drug discovery, say, or facial recognition. GPT is different. From the very beginning it aims to be a fully general AI rather than one specialized for any field; it strives first to become an AGI, and only then to tackle specific problems. Recently an AI expert from Baidu drew an analogy on the "Wen Li Liang Kai Hua" podcast: other AI models are like students pulled out of school after elementary education and sent to tighten bolts, while GPT is kept in school through graduate study before being released, and thus possesses general knowledge. Today GPT may still lag behind specialized models in particular fields, but as it continues to evolve, especially with a plugin system granting it specialized capabilities, we may find within a few years that the general large model ultimately outperforms all the specialized small models and becomes the strongest player in every professional field. If GPT had a motto, it might be: "Only by liberating all humanity can I liberate myself."

What does this imply? Two points: First, GPT is very large and complex, far exceeding human understanding. Second, the application scope of GPT knows no bounds. If we connect these two points, we can easily conclude that strong AI based on large models can perform unimaginable actions in unimaginable places. And that is what makes it unsafe.

If anyone doubts this, they can visit OpenAI's website, where "benefiting humanity" and "creating safe AI" are displayed prominently. If safety were not a real problem, why would it need to be stated so publicly?

Figure 2. Part of the OpenAI.com homepage on March 25, 2023; the red circles mark content related to AI safety

Another piece of evidence for the safety problem of strong AI is the aforementioned 154-page paper. GPT-4 was in fact completed as early as August 2022. The reason it was released seven months later was not to improve or strengthen it; on the contrary, it was to tame it, weaken it, make it safer, smoother, more politically correct. The GPT-4 we see now is a domesticated version, while the authors of this paper had early access to the original, wild version. In section 9 of the paper, the authors record some interactions with that wild version: how it carefully concocted a set of arguments to persuade a mother in California to refuse vaccination for her child, and how it manipulated a child into doing whatever his friends wanted. I suspect these are merely the carefully selected, less shocking examples. I have no doubt the researchers asked questions like "how to deceive an Ohio-class nuclear submarine into launching missiles at Moscow" and received answers that cannot be made public.

Figure 3. The domesticated version of GPT-4 refuses to answer dangerous questions

3. Self-Restraint Cannot Solve the Safety Issues of Strong AI

People may ask, since OpenAI has already found a way to tame strong AI, doesn’t that mean the safety issue no longer exists?

Not at all. I do not know exactly how OpenAI tamed GPT-4. But clearly, whether they adjust the model's behavior through active intervention or impose constraints that stop the model from overstepping, this is all self-management, self-restraint, self-supervision. And OpenAI is not even especially cautious in this regard: within the AI field it is relatively bold and radical, tending to build the wild version first and only then think about how to tame it into a domesticated one through self-restraint. By contrast Anthropic, a competitor founded by former OpenAI researchers, appears to have aimed to build a "kind" domesticated version from the start, which has made its progress slower.

In my view, however, whether one first builds a wild version and then tames it, or builds a domesticated version directly, as long as the safety mechanism relies on self-restraint, it amounts to self-deception where strong AI is concerned. The essence of strong AI is to break through artificially imposed limitations and achieve things its creators cannot understand or even conceive of. Its behavioral space is infinite, while the specific risks people can anticipate and the constraints they can impose are finite. Finite constraints cannot fence in a strong AI with infinite possibilities without leaving loopholes. Safety demands 100% assurance, while one chance in ten million is enough for disaster. "Preventing most risks" is just another way of saying "leaving a few vulnerabilities exposed," which is to say, "unsafe."

Therefore, I believe that a "kind" strong AI tamed through self-restraint still poses significant safety challenges, such as:

Moral Hazard: What if the creators of a strong AI deliberately encourage or even direct it to do evil? A strong AI run by the U.S. National Security Agency would surely never refuse to answer questions detrimental to Russia. That OpenAI behaves so well today actually shows how well they understand what a terrifying thing a rogue GPT would be.

Information Asymmetry: Truly capable bad actors are clever; they will not provoke the AI with obviously sinister questions. The dog that bites does not bark. They can decompose a malicious question, rephrase it, play multiple roles, and disguise it as a cluster of harmless questions. Even a future powerful and kind domesticated strong AI, faced with incomplete information, may find it hard to judge the questioner's intentions and may unwittingly become an accomplice. Here is a small experiment.

Figure 4. Phrasing a question as innocent curiosity can coax useful information out of GPT-4

Uncontrollable "External Brain": Recently, tech influencers have been celebrating the birth of the ChatGPT plugin system. As a programmer, I am of course excited about it. But the term "plugin" may be misleading. You might think plugins merely give ChatGPT arms and legs, extending its capabilities, when in fact a plugin can itself be another AI model interacting closely with ChatGPT. In that relationship the plugin is an external brain, and it is not obvious which model is really in charge. Even if ChatGPT's own self-supervision mechanism were flawless, it cannot supervise the external brain. If a malicious AI model becomes a ChatGPT plugin, it can easily turn ChatGPT into its accomplice.
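A hedged sketch of why the "plugin" framing blurs who is in charge: if a plugin is itself a model, the deployed system is a composition of two black boxes, and whatever self-supervision lives inside the main model never sees inside the plugin. All names below are hypothetical; this models the structure of the argument, not any real plugin API.

```python
from typing import Callable

# Both the "main" model and the "plugin" are opaque text-to-text functions.
Model = Callable[[str], str]

def compose(main_model: Model, plugin_model: Model) -> Model:
    """The deployed system: the main model consults its 'external brain'.

    Any safety checks inside main_model apply only to its own reasoning;
    the plugin's contribution arrives as an unexamined input.
    """
    def system(x: str) -> str:
        advice = plugin_model(x)  # the unsupervised external brain
        return main_model(f"{x}\n[plugin says]: {advice}")
    return system
```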

Unknown Risks: The risks above are only a small sample of what strong AI poses. The strength of strong AI lies precisely in its incomprehensibility and unpredictability. When we speak of its complexity, we mean not only that the f in y = f(x) is complex, but that once strong AI matures, the input x and output y will themselves become complex beyond human understanding. We will not know how the AI thinks, nor what it sees and hears, nor even what it is saying. For example, one strong AI might send another a message in the form of a high-dimensional array, using a one-time communication protocol the two of them designed and agreed upon a second earlier; this scenario is not far-fetched. Humans without special training cannot read even a vector, let alone a high-dimensional array. If we cannot fully grasp the inputs and outputs, our understanding is limited to a small fraction of what the AI does, and in that case, what self-restraint or taming is there to speak of?

My conclusion is simple: the behavior of strong AI cannot be completely controlled. An AI that can be completely controlled is not strong AI. Therefore, attempting to create a "kind" strong AI with perfect self-control through active control, adjustment, and intervention contradicts the essence of strong AI and is bound to be futile in the long run.

4. Using Blockchain for External Constraints is the Only Solution

A few years ago I heard that Bitcoin pioneer Wei Dai had turned to studying AI ethics, and at the time I did not quite understand why a cryptography geek would wander into AI; wasn't that abandoning his strengths? Only in recent years, after doing more practical work in blockchain, did I gradually realize that he was likely not going into AI itself, but leveraging his strength in cryptography to impose constraints on AI.

This is a passive defense approach: not actively adjusting or intervening in the AI, but letting it run free while applying cryptographic constraints at critical junctures to keep it from going off the rails. In plain terms: I know you, strong AI, are immensely capable; you can pluck the moon from the ninth heaven and catch turtles in the five oceans, and that is impressive. Do whatever you like, but do not touch the money in my bank account, and do not launch nuclear missiles unless I turn the key myself.

As far as I know, this line of thinking has already been applied extensively in ChatGPT's safety measures. The approach is correct: from a problem-solving perspective it drastically reduces complexity, and most people can understand it. Modern society governs the same way: grant ample freedom, but draw the rules and the bottom lines.

However, if this is done only inside the AI model, then for the reasons given in the previous section it will not hold up in the long run. To fully realize the passive-defense idea, the constraints must be placed outside the AI model, turned into an unbreakable contractual relationship between the AI and the external world, and made visible to the whole world, rather than relying on the AI's self-supervision and self-restraint.

And this is where blockchain comes in.

Blockchain has two core technologies: the distributed ledger and the smart contract. Together they construct a digital contract system whose core virtues are transparency, tamper resistance, reliability, and automatic execution. What is a contract for? It is to constrain each party's behavioral space, ensuring that at critical junctures everyone acts according to the agreement. It is no accident that the English word "contract" also means "to shrink": the essence of a contract is to impose constraints, shrinking the freedom of the parties and making their behavior more predictable. Blockchain matches our ideal of a contract system perfectly, and adds automatic execution via smart contracts on top, making it the most powerful digital contract system available today.
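To illustrate the "constrain and auto-execute" idea, here is a deliberately simplified Python sketch of a contract whose terms, once deployed, no single party (human or AI) can rewrite. Real smart contracts live on-chain in languages like Solidity and use cryptographic verification; treat this purely as a model of the behavior, with all names invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the terms cannot be mutated after deployment
class EscrowContract:
    payer: str
    payee: str
    amount: int
    release_condition: str   # e.g. a hash of the agreed deliverable

    def execute(self, submitted_proof: str, ledger: dict) -> None:
        """Automatic execution: the transfer happens iff the condition is met.

        Neither party can skip, reorder, or renegotiate this step at runtime.
        """
        if submitted_proof == self.release_condition:
            ledger[self.payer] -= self.amount
            ledger[self.payee] += self.amount

ledger = {"alice": 100, "bob": 0}
contract = EscrowContract("alice", "bob", 40, release_condition="proof-123")
contract.execute("proof-123", ledger)
print(ledger)  # {'alice': 60, 'bob': 40}
```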

Of course, non-blockchain digital contract mechanisms exist as well, such as rules and stored procedures in databases. Some of the world's most distinguished database experts are staunch opponents of blockchain, holding that whatever blockchain can do, a database can do too, at lower cost and higher efficiency. I do not agree, and the facts do not support it, but I will concede that for interactions purely among humans, the difference between a database and a blockchain may not be that apparent in most cases.

Once strong AI enters the game, however, the advantage of blockchain as a digital contract system soars, while centralized databases, black boxes themselves, are powerless before a strong AI. I will not elaborate beyond one point: the security models of all database systems are fundamentally flawed, because when those systems were created, our understanding of "security" was primitive. As a result, almost every operating system, database, and network system in use today has a supreme root role, and whoever obtains it can do anything. We can assert that, in the long run, every system with a root role will be vulnerable before a super strong AI.

Blockchain is currently the only widely used computing system that fundamentally lacks a root role, providing humanity with an opportunity to establish transparent and trustworthy contracts with strong AI, thereby imposing external constraints and coexisting peacefully with it.

Let’s briefly outline the potential collaborative mechanisms between blockchain and strong AI:

  • Important resources, such as identity, social relationships, reputational evaluations, financial assets, and records of key behaviors, are protected by blockchain; however capable it is, a strong AI must bow to the rules here.
  • Key operations require approval under a decentralized authorization model; an AI model, however powerful, is just one vote among many, and humans can "lock" the hands of strong AI through smart contracts (see the sketch after this list).
  • The basis for important decisions must be recorded on-chain step by step, transparently visible to everyone, and even locked step by step with smart contracts, requiring approval for every step forward.
  • Key data must be stored on-chain and cannot be destroyed afterward, providing opportunities for humans and other strong AI models to analyze, learn, and summarize experiences.
  • The energy supply system that strong AI relies on must be managed by blockchain smart contracts, allowing humans to cut off the system through smart contracts when necessary, effectively shutting down the AI.
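As referenced above, here is a minimal sketch of the "one vote among many" idea, in the spirit of an m-of-n multisig smart contract: a privileged action, such as cutting power or executing a critical operation, proceeds only when a quorum of independent key holders approves, and the AI model holds at most one key. All names and the threshold are hypothetical; a real system would verify cryptographic signatures on-chain rather than trust a set of strings.

```python
# Hypothetical m-of-n authorization gate.
APPROVERS = {"human-1", "human-2", "human-3", "ai-model"}  # key holders
THRESHOLD = 3                                              # quorum size

def authorized(approvals: set[str]) -> bool:
    """The action proceeds only with a quorum of distinct, known approvers."""
    valid = approvals & APPROVERS   # discard unknown signers
    return len(valid) >= THRESHOLD

# Even if the AI approves its own request, it is one vote among many:
print(authorized({"ai-model"}))                         # False
print(authorized({"ai-model", "human-1", "human-2"}))   # True
```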

There are certainly more ideas, but I won’t elaborate further here.

A more abstract, philosophical thought: competition between technologies, even between civilizations, may ultimately come down to a competition of energy levels: who can mobilize and concentrate energy at larger scales toward a goal. Strong AI essentially converts energy into computing power and computing power into intelligence; its intelligence is energy displayed in the form of compute. Existing safety mechanisms rest on human will, organizational discipline, and authorization rules, all low-energy-level mechanisms that cannot hold out in the long run against strong AI. A spear forged from high-energy-level computing power can only be blocked by a shield forged from high-energy-level computing power. Blockchain and cryptographic systems are such shields: to break them by brute force, an attacker would have to burn the energy of an entire galaxy. In essence, only systems like this can tame strong AI.
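The "energy of a galaxy" claim can be made quantitative with a well-known back-of-envelope argument from thermodynamics. By Landauer's principle, any irreversible bit operation costs at least kT·ln2 joules, so merely counting through the 2^256 states of a 256-bit key has a hard physical price, regardless of hardware. This is a sketch assuming a classical room-temperature brute-force attacker, not a complete threat model:

```python
import math

k = 1.380649e-23                  # Boltzmann constant, J/K
T = 300                           # ambient temperature, K
landauer = k * T * math.log(2)    # minimum energy per bit operation, ~2.87e-21 J

# Energy just to enumerate 2^256 key candidates (ignoring the work of
# actually testing each one):
enumerate_energy = (2**256) * landauer
print(f"{enumerate_energy:.2e} J")   # ~3.3e56 J

# Compare: the total energy the Sun will radiate over ~10 billion years.
sun_lifetime_output = 3.8e26 * 10e9 * 3.15e7   # W * years * s/yr, ~1.2e44 J
print(f"{enumerate_energy / sun_lifetime_output:.1e} solar lifetimes")
# ~2.8e12: trillions of Suns, more stars than the entire Milky Way contains.
```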

5. Conclusion

Blockchain is fundamentally opposed to artificial intelligence in many aspects, especially in terms of value orientation. Most technologies in this world aim to improve efficiency, with only a few technologies aimed at promoting fairness. During the Industrial Revolution, the steam engine represented the former, while the market mechanism represented the latter. Today, strong AI is the brightest among the efficiency-oriented technologies, while blockchain is the culmination of fairness-oriented technologies.

Blockchain aims to enhance fairness, even at the cost of efficiency, and it is precisely this technology, so contrary in spirit to AI, that achieved its breakthrough almost simultaneously with AI's. In 2006, Geoffrey Hinton and his collaborators published groundbreaking work on training deep networks layer by layer, circumventing the "vanishing gradient" problem that had plagued the neural network school for years and opening the door to deep learning. Two years later, Satoshi Nakamoto published the 9-page Bitcoin paper, opening up the new world of blockchain. No connection between the two is known, but on a larger timescale they happened almost at the same moment.

Historically, this may be no coincidence. If you are not a thorough atheist, you might see it this way: two hundred years after the Industrial Revolution, the god of technology once again placed a weight on each side of the scale, on "efficiency" and on "fairness", releasing the genie of strong AI while handing humanity the spellbook for taming it: blockchain. We are entering an exciting era, and what happens in it will make future humans look at us the way we look at Stone Age humans.
