The Development of AI: From Symbolism to AI 2.0

Dot Labs
2024-07-24 19:06:41
Collection: Web3 AI Deep Report

Author: Dylan Wang

Report Highlights:

Tracing back: The development of AI has progressed in waves. Since the emergence of symbolic and connectionist approaches in the 1960s, AI technology has made continual breakthroughs: from theoretical research to practical application, from advances in single fields to a comprehensive explosion across application directions, and now, with the arrival of the AI 2.0 era, deep integration into modern life. AI development is layered and wave-like: in 2023 we experienced an explosion of large models, and the next step, AI agents and ultra-large models capable of self-supervised learning, will lead future growth.

Riding the waves: AI is stepping into the 2.0 era. AI 1.0, centered on CNN (Convolutional Neural Network) models, surpassed human capabilities in fields such as computer vision and natural language understanding. Its limitations, however, are equally evident: data collection and labeling are costly, and a model built for one vertical transfers poorly to others. AI 2.0 overcomes AI 1.0's single-domain, one-model-per-task approach, opening a new era of AI development. The first phenomenal application of the AI 2.0 era is generative AI, which can summarize existing data and generate new content from it. Its disruptive potential is expected to contribute approximately $7 trillion in value to the global economy.

1. The Past and Present of AI

AI refers to technologies that enable machines to exhibit intelligence comparable to that of humans. The field emerged in the 1950s, but it was not until the release of ChatGPT (built on GPT-3.5) at the end of 2022 that a true wave of AI popularity began: in just two months, its user base grew past 100 million. The fundamental reason for this explosion is that this round of AI is no longer confined to narrow, specialized domains; machines have finally become generalists, evolving toward general artificial intelligence capable of learning and interacting with humans across virtually any field. AI has since begun to permeate every aspect of people's lives and production.

Throughout the development of artificial intelligence, people of different eras and academic backgrounds have understood intelligence, and how to realize it, in different ways, giving rise to distinct schools of thought. Influential schools and their representative methods are shown in Figure 2.

From its inception, artificial intelligence (AI) has explored an uncharted path, with many ups and downs. Its history can be roughly divided into five stages:

Initial Development Stage: 1943 - 1960s. After the concept of artificial intelligence was proposed, symbolic and connectionist (neural network) approaches emerged, achieving a series of remarkable research results, such as machine theorem proving, checkers programs, and human-computer dialogue, marking the first peak in AI development.

Reflective Development Stage: 1970s. The breakthroughs of the early period greatly raised expectations for artificial intelligence, and people began to attempt more challenging tasks; however, insufficient computational power and weak theoretical foundations doomed these unrealistic goals, and AI development entered a trough.

Application Development Stage: 1980s. AI entered a new peak of application development. Expert systems simulated the knowledge and experience of human experts to solve problems in specific fields, achieving a significant breakthrough in moving AI from theoretical research to practical application and from general reasoning strategies to the use of specialized knowledge. Meanwhile, machine learning (especially neural networks) began to explore different learning strategies and methods, slowly reviving in numerous practical applications.

Stable Development Stage: 1990s - 2010. The rapid development of internet technology accelerated innovative research in artificial intelligence and pushed AI technology into practical use, with significant progress across related fields. In the early 2000s, because expert-system projects demanded ever more hand-coded explicit rules, efficiency fell and costs rose, and the focus of AI research shifted from knowledge-based systems to machine learning.

Thriving Development Stage: 2011 to present. With the development of information technologies such as big data, cloud computing, the internet, and the internet of things, ubiquitous sensing data and computing platforms like graphics processors have propelled the rapid development of AI technologies represented by deep neural networks, significantly bridging the technological gap between science and application. Major breakthroughs have been achieved in AI technologies such as image classification, speech recognition, knowledge question answering, human-computer games, and autonomous driving, ushering in a new peak of explosive growth.

Currently, we are in the second wave of the AI revolution. Over the past year AI has gone through a round of hype, and parts of the market believe its current capabilities have been fully explored and offer no new highlights. However, AI development is layered and wave-like. Building on the capabilities of foundational large models, we are now in the second phase of artificial intelligence, with many breakthroughs worth looking forward to, including multimodality, AI agents, mixed reality, and embodied intelligence. Facing a transformation of this magnitude sweeping across industries, we should not fixate on the short term but attend instead to the industry's long-term progress and application potential.

2. The Arrival of the AI 2.0 Era

1. Transitioning from AI 1.0 to AI 2.0

AI 1.0 is centered on CNN (Convolutional Neural Network) models in computer vision, marking the beginning of the era of AI perception intelligence, in which machines began to surpass humans in fields such as computer vision and natural language understanding and created significant value. However, AI 1.0 also hit bottlenecks: most industries wishing to use AI had to spend enormous sums collecting and labeling data, and these datasets and single-purpose models became "islands" with little reusability across verticals. This is why most AI 1.0 companies invested heavily in R&D yet remained loss-making for years. In addition, AI 1.0 lacked an equivalent of Windows in the PC-internet era or Android in the mobile-internet era, a platform that lowers the barrier to application development and fosters a complete ecosystem. Years on, AI 1.0 has yet to achieve true commercial success.

Today, the significant leap of AI 2.0 lies in overcoming its predecessor's single-domain, one-model-per-task limitation: a foundational model is trained on supermassive data without manual labeling and can then be adapted to a wide variety of tasks through fine-tuning, genuinely promising a platform effect and opening up innovative commercial applications.

AI 2.0 has three distinct characteristics:

First, it can perform self-supervised learning on supermassive data without manual labeling.

Second, the foundational model is very large, requiring thousands of GPUs for training.

Third, the trained foundational model possesses cross-domain knowledge, and fine-tuning can adapt it to tasks in different fields at reduced cost.

With these characteristics, the AI 2.0 era is indeed a refinement and improvement over the 1.0 era.
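As a rough illustration of the third characteristic, the sketch below shows the pre-train-then-fine-tune paradigm in code. It is a minimal sketch, assuming the open-source Hugging Face transformers library, the publicly available bert-base-uncased checkpoint (pre-trained with self-supervision on unlabeled text), and a hypothetical two-class sentiment task; none of this comes from the report itself.

```python
# Minimal sketch: adapting a self-supervised foundational model to a new task.
# Assumes: pip install torch transformers; the task and data are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The body of this model was pre-trained on massive unlabeled text;
# only the small classification head on top is new and task-specific.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# A toy labeled batch for the downstream task (real fine-tuning uses a dataset).
batch = tokenizer(["great product", "terrible service"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

# One fine-tuning step: the cross-domain knowledge in the pretrained weights
# is reused, so only cheap task-specific tuning remains.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```

Because the pretrained body already carries cross-domain knowledge, only the small task head and a handful of inexpensive gradient steps are specific to the new field, which is exactly the cost reduction the third characteristic describes.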

The development paradigm of AI 2.0 is iterative, transitioning from "assisting humans" to "fully automated" through three stages:

The first stage is human-machine collaboration. Productivity tools will be the first to upgrade, with all user interfaces being redefined: document tools will no longer require word-for-word input; instead, users will inform AI of the desired article style; drawing software will create images based solely on textual descriptions. In this stage, humans will still collaborate with AI, filtering and correcting AI-generated content to prevent errors and disasters.

The second stage is partial automation. Applications and industries with high fault tolerance will achieve AI automation, such as advertising, e-commerce, search engines, and game production.

The third stage is full automation. AI will become fully automated and applicable anywhere, achieving breakthroughs in error-sensitive fields, with applications such as AI doctors and AI teachers becoming possible.

2. Phenomenal Application of AI 2.0: Generative AI

The first phenomenal application of the AI 2.0 era is generative AI, currently popular under the name AIGC (Artificial Intelligence Generated Content). Generative AI learns through self-supervision, without labeled data; with it, AI gradually moves from "assisting" humans to "replacing" human labor, and all user interfaces will be redefined.

Before 2010, AI was dominated by decision-making (discriminative) AI, which learns the conditional probability distribution within data: it extracts feature information from samples, matches it against feature data in a database, and ultimately classifies the samples, focusing mainly on recognition and analysis. After 2011, with the rise of deep learning algorithms and large-scale pre-trained models, AI entered the era of generative AI. Generative AI is characterized by its ability to summarize existing data and generate new content from it, adding capabilities for learning, execution, and social collaboration on top of decision-making AI's perception and decision abilities. Today, artificial intelligence continues to develop along the two main lines of generation and generality.
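To make that contrast concrete, here is a toy sketch with synthetic data (scikit-learn is used for brevity; the data and numbers are hypothetical): a decision-making model learns the conditional distribution p(label | input) and can only recognize, while a generative model fits the data distribution itself and can therefore produce samples that never existed.

```python
# Toy contrast between decision-making (discriminative) and generative AI.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)),   # class 0 samples
               rng.normal(3, 1, (50, 2))])  # class 1 samples
y = np.array([0] * 50 + [1] * 50)

# Discriminative: learns p(y | x) and can only classify existing inputs.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.5, 2.5]]))        # -> recognition/analysis of a sample

# Generative: fits the data distribution p(x) and can sample new points.
gen = GaussianMixture(n_components=2, random_state=0).fit(X)
new_points, _ = gen.sample(3)           # -> "creates" data the set never contained
print(new_points)
```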

Generative AI has two key advantages, illustrated in the sketch that follows:

It is trained simply to predict the next piece of content.

It needs no labeled data, because the raw data supervises itself.
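A minimal sketch of that objective, assuming PyTorch and a deliberately tiny stand-in model (all names here are illustrative, not from the report): the training targets are just the input tokens shifted by one position, so raw text supervises itself and no human labeling is involved.

```python
# Next-content prediction: the core self-supervised objective of generative AI.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32
model = nn.Sequential(                      # stand-in for a real transformer
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)

tokens = torch.randint(0, vocab_size, (1, 16))   # token ids of a raw "sentence"
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # targets = inputs shifted by one

logits = model(inputs)                           # (1, 15, vocab_size)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()   # one training step; the "labels" came for free from the text
```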

The disruptive potential of generative AI is increasingly recognized by enterprises, which are no longer questioning what generative AI is but rather seeking to understand the specific business value that generative AI investments can bring. Gartner predicts that by 2026, over 80% of enterprises will use generative AI APIs or models or deploy applications supporting generative AI in production environments, while this proportion was less than 5% at the beginning of 2023.

Technological changes are driving the expansion of scenarios, and generative AI is moving from heated discussions to practical applications, with its value creation potential being astonishing. McKinsey predicts that generative AI is expected to contribute approximately $7 trillion in value to the global economy and increase the overall economic benefits of AI by about 50%.

3. AI 2.0 Empowering Various Industries

According to Innovation Works, the future of AI 2.0 centers on three directions: intelligent applications, platforms, and infrastructure.

The first is intelligent applications. AI 2.0 applications will enter a phase of widespread development, including vertical AI assistants across industries and metaverse applications that were previously unattainable. Beyond new applications, many existing ones can be rewritten, such as search engines, content creation, and advertising marketing. AI 2.0 will revolutionize user experiences and create entirely new business models, containing immense imaginative potential.

The second is commercial platforms. AI 2.0 platforms will accelerate the R&D and commercialization of the new generation of AI 2.0 applications, with strategically positioned platform companies driving a healthy, competitive AI 2.0 ecosystem.

The third is infrastructure. Beyond applications and platforms, the infrastructure for training, running, and managing AI models is also a key focus, including AI chip companies that support the training of giant AI 2.0 models and innovative technology companies that can make AI training faster, cheaper, and simpler.

Specifically, AI 2.0 will accelerate the ignition of commercial potential in six major areas, entering a phase of explosive application growth that enhances productivity:

AI 2.0 + E-commerce/Advertising: In the AI 2.0 era, e-commerce and advertising will be driven far more by AI and big data, capable of real-time testing and dynamic adjustment, even folding social hotspots from minutes earlier into ad content to maximize conversion rates. Content will be tailored and generated in real time for different audiences, truly achieving "one-to-one" marketing.

AI 2.0 + Film/Entertainment: AI can customize television and short video content based on public preferences, making its creative content more appealing to the audience, achieving better ratings and reputation. AI + multimodal creation will become the mainstream of the next generation of entertainment, with AI-assisted creation gradually building a new creative industry ecological value chain.

AI 2.0 + Search Engines: Future search engines will shift from traditional retrieval models to a "question-answer" model. The next generation of conversational search engines will become the "Holy Grail of AI 2.0" that global tech giants compete for, and the current search advertising business model will also undergo transformation. However, due to people's expectations for "precision" in search results, current technology still requires significant improvements to achieve effective question-answer search.

AI 2.0 + Metaverse/Games: AI 2.0 will greatly reduce the content generation costs in virtual worlds such as games and the metaverse. For instance, AI can serve as a real-time chat companion, enhancing interactive enjoyment, increasing entertainment value, and encouraging user participation to maximize gaming duration. The imaginative content generation of AI multimodality will also become the backbone of the metaverse.

AI 2.0 + Finance: Faster, more accurate, and smarter content production methods will significantly enhance the timeliness and output of financial news and market research analysis. However, given the seriousness of financial content, manual fact-checking and verification remain essential. AI can also automate the production of financial information and the launch of financial products, improving the efficiency and quality of information flow and transaction volume in financial institutions.

AI 2.0 + Healthcare: AI can quickly and accurately analyze a patient's overall health status, incorporating all data, biological characteristics, physical examinations, medical history, and personal model predictions, becoming a valuable assistant to doctors and significantly accelerating scientific diagnosis and treatment decisions. With AI, more targeted drug development can be achieved, enabling personalized medical triage and treatment plans, promoting the arrival of "personalized medicine."
