A Learning Path for Investing in the AI Field
In Saturday's online discussion, someone asked what books to read, what magazines to subscribe to, and how to go about learning in the AI field.
I believe that learning AI should be based on our goals.
For me, the purpose of learning AI is simple: not to become an expert in the field, nor to make a living in this area in the future, but purely to understand the development of this field so that I can find suitable investment opportunities within it.
With this goal in mind, I think the key is to understand the logic of AI so that we can have a general judgment about new developments in the AI field in the future.
To understand the logic of AI, I think we can start with some books that introduce the basic principles of AI.
In this regard, there is a book that even Sam Altman has recommended: "This Is ChatGPT" by Stephen Wolfram (published in English as "What Is ChatGPT Doing ... and Why Does It Work?").
This book starts with the simplest basic concepts and introduces the mathematical principles and workings of large language models. As long as you can do basic addition, subtraction, multiplication, and division, you can read this book.
If you find it really difficult while reading, you can even just read the first few chapters; you can still have a general understanding of the principles of large language models without looking at the later chapters.
Once we understand these principles, we will know why training a large language model requires GPUs, data, and algorithms, and what role each of them plays in the training process.
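To make that division of labor concrete, here is a minimal sketch of one training step (my own illustration, assuming PyTorch and a toy model, not anything from the book): the data supplies (token, next-token) examples, the algorithm turns them into weight updates through the forward pass, loss, and backpropagation, and the GPU is where the heavy matrix arithmetic runs.

```python
import torch
import torch.nn as nn

# the GPU's role: the hardware that executes the matrix arithmetic
device = "cuda" if torch.cuda.is_available() else "cpu"

# a toy "language model": predict the next token id from the current one
vocab_size, embed_dim = 1000, 64
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# the data's role: (current token, next token) pairs sampled from a corpus
tokens = torch.randint(0, vocab_size, (32,), device=device)
targets = torch.randint(0, vocab_size, (32,), device=device)

# the algorithm's role: forward pass, loss, backpropagation, weight update
logits = model(tokens)
loss = loss_fn(logits, targets)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Scale this toy loop up to billions of parameters and trillions of tokens, and each of the three ingredients becomes a bottleneck in its own right.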
If we think further, we can understand what optimizations NVIDIA has made to its GPUs to improve the training efficiency of large language models, why NVIDIA has historically acquired certain companies for those optimizations, and what those acquired companies actually do (the Mellanox acquisition, which brought NVIDIA its high-speed networking and interconnect technology, is one well-known example).
Following this logic, I roughly understand why many so-called decentralized computing projects on the market are "pseudo-projects"—not because the direction of decentralized computing is wrong, but because it is difficult to design an ideal decentralized computing system within the NVIDIA framework.
To truly achieve such a system, I believe the GPU itself would need to be redesigned. If we insist on building a decentralized computing system within NVIDIA's framework, the result can at most be an experimental or demonstration product; it will be hard for it to become a serious competitor to centralized computing systems.
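A back-of-the-envelope calculation (my own, with illustrative numbers, not taken from any of the material above) shows where the difficulty lies: synchronizing gradients between machines is quick over a datacenter interconnect but painfully slow over ordinary internet links, and that gap is what a decentralized system built on today's hardware has to live with.

```python
# Rough, illustrative numbers only; real figures vary widely by hardware and network.
grad_bytes = 7e9 * 2          # a 7B-parameter model, fp16 gradients: ~14 GB per sync

datacenter_bw = 400e9 / 8     # a 400 Gb/s InfiniBand-class link: ~50 GB/s
internet_bw = 1e9 / 8         # a 1 Gb/s home connection: ~0.125 GB/s

print(f"datacenter sync: {grad_bytes / datacenter_bw:.2f} s per step")  # ~0.28 s
print(f"internet sync:   {grad_bytes / internet_bw:.0f} s per step")    # ~112 s
```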
Once we have a basic understanding of AI principles, we no longer need to delve deeper into mathematics. Next, I will focus on the application scenarios and development trends of AI. In this regard, I have read "The Turning Point: Standing on the Eve of AI Disrupting the World" by Wan Weigang.
This book is good because it is highly imaginative while still being logically grounded, which lets us speculate rationally about what a world filled with AI might look like.
Apart from these two books, I haven't specifically read any others; the rest is mostly articles I read online (on WeChat public accounts, Twitter, and so on), through which I follow new developments. Based on the new information in those articles, I enrich and extend my understanding of AI.
For example, we know that today's ChatGPT is a large language model, trained primarily so that AI can understand and generate language. But human intelligence is diverse: beyond language, we have many other ways of perceiving the world. Many articles in the AI field cover the development of other kinds of models, such as behavioral models and spatial models.
This knowledge broadens our horizontal view of AI and shows how many fields its development spans. Some of these fields are still at the research stage, while others already show promising signs and may produce their own "ChatGPT" in the coming years. When those new "ChatGPTs" appear, how much cloud capacity, how much computing power, and how many GPUs will they need?
All of this can greatly enrich our understanding and imagination of investments in the AI field.
Additionally, I suggest that everyone read some insights and summaries from well-known venture capital firms regarding the development of AI.
For example, I recently read some of Sequoia Capital's views on the development of the AI field, which mentioned the potential emergence of an "agent economy": an economy formed by AI agents transacting and interacting with one another.
When discussing this economy, Sequoia Capital emphasized that it must have three elements:
First is permanent identity; second is seamless communication; third is security.
After reading these three elements, I immediately thought of blockchain. Aren't these three elements the killer features of blockchain technology?
Crypto wallets can serve as an AI agent's permanent identity, interactions through blockchain smart contracts provide seamless communication, and the decentralized, censorship-resistant nature of the chain provides security for AI agents.
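As a small illustration of the first element (my own sketch, assuming the eth_account Python library; none of this comes from the Sequoia material), an agent can hold a keypair, use its address as a persistent identity, and let any counterparty verify that a message really came from it:

```python
from eth_account import Account
from eth_account.messages import encode_defunct

agent = Account.create()                      # the agent's wallet / identity
print("agent identity (address):", agent.address)

message = encode_defunct(text="order: buy 10 units of compute")
signed = agent.sign_message(message)          # the agent signs its action

# any counterparty can check the signature against the claimed identity
recovered = Account.recover_message(message, signature=signed.signature)
assert recovered == agent.address
```

The identity persists for as long as the agent keeps its private key, and verification requires nothing more than the public address.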
These are some of the methods I use to learn and understand AI, for your reference.