OpenAI said it delayed the deployment of GPT-4 by more than six months to better understand the model's capabilities, benefits, and risks.
Amid growing concerns over the use of GPT-4, OpenAI has officially outlined its approach to AI safety, describing how it strives to build and deploy safe AI systems.
In its blog post, the company wrote that investing more time and resources in researching effective mitigations and alignment techniques, and testing them against real-world abuse, is a viable way to address AI safety concerns.
It also stated that improving AI capabilities and safety should go hand in hand. OpenAI said its best safety work to date has come from working with its most capable models, which it believes are better at following human instructions and easier to steer or “guide.”
The announcement comes after more than 11,000 people signed an open letter calling for a six-month pause on large-scale AI projects, particularly the training of models more powerful than GPT-4. ChatGPT has also been banned in several countries: Italy recently blocked the service over privacy concerns, and other nations, including Spain, have since taken similar steps.
OpenAI argued that it is sometimes essential to move more slowly in order to improve the safety of AI systems. It added that policymakers and AI providers will need to ensure that AI development and deployment are governed effectively at a global scale, so that no one is incentivized to cut corners in order to get ahead.
To build a safe AI ecosystem, OpenAI intends to take a collaborative approach and foster open communication among stakeholders. It believes the issue requires in-depth discussion, experimentation, and engagement, including on the limits of AI system behavior.
OpenAI Issues
OpenAI acknowledged that there are limits to what can be learned in a lab, and said it works to mitigate foreseeable risks before release. Even so, the company said it cannot predict all of the beneficial or malicious ways people may use its technology.
Because of this, OpenAI said, “we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.” The company added that it releases new AI systems cautiously and gradually to a steadily widening audience, and that it closely monitors its API partners.
OpenAI also said its technology is not permitted to generate hateful, violent, harassing, or adult content. According to the company, GPT-4 is 82% less likely than GPT-3.5 to respond to requests for disallowed content.
The company said it has put a robust monitoring system in place to detect abuse. For example, it claimed that when users attempt to upload child sexual abuse material to its image tools (DALL·E 2), it blocks the attempt and reports it to the National Center for Missing and Exploited Children.