
The AI Safety Proposed by OpenAI is More Fluff Than Reality

The company commits to the premise of “building increasingly safe AI systems” without ever explaining the “how.”

The timing of OpenAI’s detailed blog post on AI safety, published yesterday, could hardly have been better given the growing backlash against the rapid expansion of AI models. The statement, however, makes only a passing reference to the real problems ChatGPT is facing, and commits to “building increasingly safe AI systems” without ever explaining the “how.”

Conflicting Statements?

The blog post describes at length how OpenAI is approaching safety. The company claims it spent more than six months “across the organisation” after GPT-4 was trained to make it “safer and more aligned” before releasing it. Interestingly, when asked about AI alignment on Lex Fridman’s podcast last month, Sam Altman admitted that “they have not yet discovered a way to align a super powerful system.”

Regulations 

OpenAI agrees that “rigorous safety evaluations” should be conducted and says it will work with governments to develop the “best form” of legislation. The statement can also be read as a peace offering: Sam Altman’s world tour, scheduled to begin in the coming months, will likely involve him socialising with various government leaders as well as engaging with users. The theory gains weight when one recalls that Italy banned ChatGPT over an alleged violation of European privacy laws.

What Will Change?

According to OpenAI, the company does not use data to sell its services, advertise, or build profiles of people; it uses data to make its models more helpful “for people.” Its LLMs are trained on publicly available data (up to 2021). What remains unclear is what the company intends to do with the fresh information, including private and sensitive data, that users feed into its chatbots.

It will be interesting to see how far these assertions go in encouraging businesses to adopt ChatGPT and other OpenAI technologies. The company’s privacy practices have recently come under fire. After employees at a Samsung facility entered confidential material into ChatGPT, including source code and internal meeting notes, it remains unknown what will happen to that data. In response to the series of leaks, Samsung Semiconductor began developing an AI tool for internal use to prevent further exposure of sensitive data.

OpenAI says factual accuracy can be improved by incorporating user feedback, and GPT-4 is reportedly 40% more likely to produce factual responses than GPT-3.5. The statement does not squarely address biases and hallucinations, the most common problems with chatbots, noting only that there is “much more work” to be done to reduce hallucinations and educate people about the tool’s limits.

The chatbot’s inaccurate output is steadily landing OpenAI in trouble. Australian mayor Brian Hood is preparing to sue OpenAI for defamation after ChatGPT falsely claimed he had been imprisoned for bribery.
