OpenAI outlines measures to ensure AI safety: rigorous testing before releasing new systems and building safety and monitoring systems
ChainCatcher news: OpenAI, the company behind ChatGPT, has published an article titled "Our approach to AI safety" on its official website, outlining the measures it deploys to ensure the safety of its AI models.
The article states that AI tools provide many benefits to people today. OpenAI also recognizes that, like any technology, these tools carry real risks, and the company is therefore working to ensure that safety is built in at every level of the system.
Before releasing any new system, OpenAI conducts rigorous testing, engages external experts for feedback, improves model behavior through techniques such as reinforcement learning from human feedback, and builds extensive safety and monitoring systems.
In addition, OpenAI believes that powerful artificial intelligence systems should undergo strict safety assessments. The company argues that regulation is needed to ensure such practices are adopted, and says it actively collaborates with governments to develop the best forms of such regulation.
At the same time, it holds that society needs time to adapt and adjust to increasingly powerful AI. OpenAI will also focus on protecting children, safeguarding privacy, and improving factual accuracy as key aspects of ChatGPT's safety work, and will devote more time and resources to researching effective mitigation measures and techniques, testing them against diverse real-world use cases. (source link)