Torch, Bridge, Key, and Future
"What is a Question?"
This, too, is a question. On the surface, asking one is a simple act.
In our view, questioning breaks the wall between the known and the unknown, connecting past and future in the present. It runs from a child pointing at a fresh world and asking their parents what it is, all the way to the three ultimate questions of philosophy: "Who am I? Where do I come from? Where am I going?"
Questioning is a torch and a key; it is a fundamental human need and a driver of progress.
To change the world, one must first understand it, and the path to understanding is bounded by our physical bodies: it starts from perception and moves from the concrete to the abstract. Our questions develop the same way: what is it, what should we do, and why. The web3 world has only developed over the past fifteen years; it is still being built, and change is happening every moment. We believe that for those who hope to drive that change, their understanding and perception of this world matter all the more.
"Characteristics of Information in the Web3 World"
Compared to the web2 world, the information in the web3 world presents the following characteristics:
1. New concepts appear constantly, and information entropy rises by the moment.
2. The massive amount of data is difficult to digest and understand.
3. While a new order is being established, settled principles and legal precedent are lacking, and the differing perspectives and interests of those with influence pull guidance in different directions.
Web3 represents the future: digital currencies, the metaverse, VR/AR, advancing technology, and user demand all point toward it. The constraints of the physical world will be unlocked in web3, and the infrastructure of the atomic world is gradually being rebuilt there. No one can stop the power of this tide. All of this, accelerated by breakthroughs from players like OpenAI, lets us dimly make out a path in the present. QnA3.AI is born at the right time.
As an AI-powered web3 knowledge-sharing platform, we train our own specialized large language model to understand semantics and user intent precisely. From massive data sources, we filter out valuable information, then compile and reconstruct it to provide users with accurate and meaningful answers. We meet users' need to cut through the noise and gain insight in the web3 world, and further help them build their own cognitive systems and thinking frameworks.
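As a loose illustration of that flow (not a description of our production system), the sketch below retrieves the passages most relevant to a question and hands them to a specialized model to compose a grounded answer; the Document class, the keyword-overlap retrieval, and the answer() placeholder are all hypothetical.

```python
# Minimal retrieve-then-answer sketch. The Document class, the keyword-overlap
# retrieval, and the answer() placeholder are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Document:
    source: str  # e.g. project docs, on-chain data, curated news
    text: str

def retrieve(corpus: list[Document], question: str, k: int = 3) -> list[Document]:
    """Toy relevance ranking: keyword overlap stands in for semantic search."""
    terms = set(question.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(terms & set(d.text.lower().split())))
    return ranked[:k]

def answer(question: str, context: list[Document]) -> str:
    """Placeholder for the specialized LLM call: compile the retrieved,
    filtered context into an answer that names its sources."""
    cited = "\n".join(f"[{d.source}] {d.text}" for d in context)
    return f"Q: {question}\nContext:\n{cited}\n(answer composed by the specialized model)"

if __name__ == "__main__":
    corpus = [
        Document("project-docs", "QnA3 rewards users with credits for asking and voting"),
        Document("curated-news", "A layer-2 network announced a fee reduction this week"),
    ]
    question = "How are users rewarded for asking questions?"
    print(answer(question, retrieve(corpus, question)))
```

The point of the sketch is the shape of the pipeline: filtering and retrieval happen before generation, so the model composes its answer from vetted context rather than from its parametric memory alone.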
"Comparison Between Specialized Models and General Models"
Our judgment is that the market's endgame will be a handful of super players, a few strong players, and many small and medium players coexisting.
The manpower, hardware, and funding required to train general large models mean that ultimately only three to five international players will possess true general-large-model capability. Around them will sit various language models, industry models, and scenario models. General large models, however, cannot solve the specific problems of every segmented industry.
First, quality is the core requirement of segmented industries, and the hallucination problem of general large models is unacceptable under that standard. We use exclusive data sources and multi-stage screening and filtering mechanisms to mitigate this issue.
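For illustration only (the actual sources and thresholds are not described here), a multi-stage screen of the kind mentioned above could be sketched as follows; every source name, threshold, and rule is a hypothetical placeholder.

```python
# Hypothetical multi-stage screening sketch: whitelist sources, deduplicate,
# and keep an answer only when it is supported by vetted context.
# Source names and thresholds are illustrative, not actual values.

ALLOWED_SOURCES = {"onchain-data", "audited-docs"}

def stage1_whitelist(items: list[dict]) -> list[dict]:
    """Drop anything from unvetted sources before it reaches the model."""
    return [i for i in items if i["source"] in ALLOWED_SOURCES]

def stage2_dedupe(items: list[dict]) -> list[dict]:
    """Remove near-verbatim duplicates that would skew retrieval."""
    seen, out = set(), []
    for i in items:
        key = i["text"].strip().lower()
        if key not in seen:
            seen.add(key)
            out.append(i)
    return out

def stage3_grounding_check(answer: str, context: list[dict]) -> bool:
    """Crude support test: enough of the answer's terms must appear in the
    vetted context, otherwise the answer is rejected as likely hallucinated."""
    ctx_terms = set(" ".join(i["text"].lower() for i in context).split())
    ans_terms = set(answer.lower().split())
    return len(ans_terms & ctx_terms) / max(len(ans_terms), 1) >= 0.5

if __name__ == "__main__":
    raw = [
        {"source": "audited-docs", "text": "Credits are earned by asking and voting."},
        {"source": "random-blog", "text": "Credits are earned by asking and voting."},
    ]
    vetted = stage2_dedupe(stage1_whitelist(raw))
    print(stage3_grounding_check("Credits are earned by asking questions.", vetted))
```

The design intent is that hallucination is fought at both ends: low-quality inputs never reach the model, and unsupported outputs never reach the user.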
Second, professionalism is the core of quality: the workflows of segmented industries demand high quality and reward every action that improves it. Any AI solution applied to a segmented industry needs continuous adjustment to improve quality, and response speed and feedback efficiency are critical to that. The broad generality of general models can open a quality gap here; imagine replacing a rocket's control model with a general model like GPT-4, and the result would be catastrophic. This quality gap inevitably drives specialized adjustment.
Moreover, proprietary data and proprietary knowledge are barriers.
Many high-value, specific fields rely on rich proprietary datasets. The best AI solutions for these segmented industries must be trained on that data. However, the entities that own these databases will focus on protecting their data moats and are unlikely to grant third parties unrestricted access for AI training. Instead, they will build specialized AI systems for these workflows internally or through specific partnerships, and those systems will differ from general AI models.
Segmented industries must keep training user data, industry data, and even knowledge graphs or rules into the model; this is why industry large models need to exist. In the niches that general large models cannot cover, incorporating such data effectively fills gaps in industry-specific knowledge and counters the "hallucination" problem of general models. It means models must converge on ever more specific scenarios, and that more effort is needed to align technology with scenarios, rather than relying on one universal technology to adapt to all of them.
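As a hypothetical sketch of how such data can be folded into training, the snippet below turns knowledge-graph triples and logged user Q&A into supervised fine-tuning examples for an industry model; the data formats and field names are illustrative assumptions, not an actual schema.

```python
# Hypothetical sketch: turning industry data and knowledge-graph triples into
# supervised fine-tuning examples for an industry model. Formats are illustrative.

from dataclasses import dataclass

@dataclass
class Triple:
    subject: str
    relation: str
    obj: str

def triple_to_example(t: Triple) -> dict:
    """Serialize a graph fact as a prompt/response pair, so the model
    learns domain facts it would otherwise have to guess."""
    return {
        "prompt": f"What is the {t.relation} of {t.subject}?",
        "response": f"The {t.relation} of {t.subject} is {t.obj}.",
    }

def build_sft_dataset(triples: list[Triple], qa_logs: list[dict]) -> list[dict]:
    """Mix graph-derived facts with real user Q&A collected from the product."""
    return [triple_to_example(t) for t in triples] + qa_logs

if __name__ == "__main__":
    graph = [Triple("Token X", "total supply", "1,000,000,000")]
    logs = [{"prompt": "How do I stake Token X?", "response": "Via the staking module ..."}]
    for example in build_sft_dataset(graph, logs):
        print(example)
```

Serializing graph facts as prompt/response pairs is one simple way to push industry-specific knowledge into the model rather than relying on the general model's guesses.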
"Competitive Differences Among Specialized Large Language Models"
We believe the core difference lies in cognition, and in the actions that cognition guides. Almost all AI applications on the market are either trained on open-source models or built on API calls to OpenAI or Claude 2. Whichever path is taken, we believe that ultimately building a large model we can stand behind hinges on relatively leading human cognition and the ability to iterate the model continuously.
Thus, like our peer players, we have converged on a foundational strategy: use a general LLM as the L0 base, build L1 industry models on top of it, and L2 scenario models on top of those. Layer by layer, we interact with users through question and answer and through the feedback functions built into the product, iterating the model bit by bit and gradually building barriers. Even if better general models emerge in the future, we can retrain or keep iterating on top of them. LLMs "rise with the tide," and we all grow alongside those more capable than us.
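The layering can be pictured with a hypothetical sketch like the one below, in which each layer wraps the one beneath it and user feedback is collected to refine the upper layers; the class names and the toy prompt augmentation are assumptions for illustration, not our actual architecture.

```python
# Hypothetical sketch of the L0 / L1 / L2 layering described above.
# Each layer wraps the one below it; feedback collected in the product is
# used to refine the upper layers, so a better L0 base can be swapped in later.

class L0Base:
    """Stand-in for a general-purpose LLM."""
    def generate(self, prompt: str) -> str:
        return f"[general answer to: {prompt}]"

class L1Industry:
    """Adds web3 industry knowledge on top of the L0 base."""
    def __init__(self, base: L0Base, glossary: dict[str, str]):
        self.base, self.glossary = base, glossary

    def generate(self, prompt: str) -> str:
        hints = "; ".join(f"{k}: {v}" for k, v in self.glossary.items() if k in prompt)
        return self.base.generate(f"{prompt} (industry context: {hints})")

class L2Scenario:
    """Specializes the industry model for one product scenario, e.g. token Q&A."""
    def __init__(self, industry: L1Industry):
        self.industry = industry
        self.feedback: list[tuple[str, int]] = []  # (answer, user rating)

    def generate(self, prompt: str) -> str:
        return self.industry.generate(f"[scenario: token Q&A] {prompt}")

    def record_feedback(self, answer: str, rating: int) -> None:
        """Ratings gathered in-product drive the next iteration of the upper layers."""
        self.feedback.append((answer, rating))

if __name__ == "__main__":
    model = L2Scenario(L1Industry(L0Base(), {"staking": "locking tokens to earn rewards"}))
    ans = model.generate("How does staking work?")
    model.record_feedback(ans, rating=1)
    print(ans)
```

Because the L1 and L2 layers only wrap the base, a stronger L0 model released later can be slotted in underneath without discarding the industry and scenario work above it, which is the "rise with the tide" point made above.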
At the level of action, we have deep experience in pre-training, post-training, multi-modal modeling, scaling up, and inference, and we integrate that development work with the product and the market. We believe in the impact of systems and the power of compounding, and practicing that belief is the source of our confidence.
"Conclusion"
The wave of progress in artificial intelligence has been building for a decade, and we expect a rich ecosystem to emerge: high-value, specialized industry AI models built from AI components, alongside a small number of general AI models supporting a wide range of AI workflows. Amid the wave of web3, QnA3 will be the universal key for web3-native users and the best bridge for the web2 world to enter the web3 world.