Anything you type into ChatGPT is saved on servers run by OpenAI, where it may be used to the company's advantage.
Engineers in Samsung's Semiconductor division recently exposed confidential information while using ChatGPT to fix bugs in their source code. In under a month, three separate incidents of employees disclosing sensitive data through the chatbot were documented.
In one incident, an employee asked ChatGPT to optimise test sequences for identifying chip defects. In another, a worker used the tool to turn meeting notes into a presentation.
Notably, the leaks surfaced only three weeks after Samsung lifted an earlier ban on staff use of ChatGPT, a ban that had been imposed over precisely this concern. Samsung has now warned its employees against using the chatbot, since recovering data that has already been submitted to external servers would be difficult.
A separate bug recently exposed ChatGPT subscribers' personal and billing information, along with parts of their chat histories. OpenAI notified 1.2% of ChatGPT Plus subscribers that their billing details, including their first and last names, billing address, credit card type, card expiry date, and the last four digits of their card number, may have been visible to another user during a nine-hour window on March 20.
According to OpenAI's internal investigation, the breach was caused by a bug in redis-py, an open-source Redis client library.
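The following sketch illustrates the general class of bug involved, not the actual redis-py code: if a cancelled request leaves an unread response on a pooled connection, the next user of that connection can receive the previous user's data. All class and variable names here are illustrative.

```python
from collections import deque

class FakeConnection:
    """Simulates a connection whose responses queue up in arrival order."""
    def __init__(self):
        self._inbox = deque()

    def send(self, request):
        # The server answers every request it receives.
        self._inbox.append(f"response-to:{request}")

    def read(self):
        # Returns the oldest unread response on this connection.
        return self._inbox.popleft()

class NaivePool:
    """A pool that hands out connections without checking their state."""
    def __init__(self):
        self._conn = FakeConnection()

    def get(self):
        return self._conn

pool = NaivePool()

# User A sends a request but is cancelled before reading the reply,
# so the response sits unread on the pooled connection.
conn = pool.get()
conn.send("user-A-secret-query")

# User B is handed the same connection and issues their own request.
conn = pool.get()
conn.send("user-B-query")
leaked = conn.read()   # reads user A's response instead of user B's
print(leaked)          # prints "response-to:user-A-secret-query"
```

A correct pool would discard or drain a connection whose last response was never consumed before returning it to another caller.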
“We had a significant issue in ChatGPT due to a bug in an open source library, for which a fix has now been released,” said Sam Altman, CEO and co-founder of OpenAI, in a tweet.
A small number of users were able to see the titles of other users' conversation histories, he added.
Even for its premium ChatGPT Plus subscribers, OpenAI retains user data for the purpose of training its models unless users explicitly opt out. And even after opting out, the data is not deleted until a month has passed.
The information you enter into ChatGPT is therefore stored on OpenAI's servers, where it may be used, in the company's words, "to develop new programmes and services", or shared with Microsoft.
This steady stream of incidents highlights the hazards that come with the efficiency these tools provide, and the obvious question is how to mitigate those risks in environments where sensitive data is routinely handled.
Is a ban the answer?
Italy temporarily banned ChatGPT last month because the chatbot does not comply with the EU's General Data Protection Regulation, which grants a "right to be forgotten". There is currently no mechanism that allows users to request the removal of their data from a machine-learning system once it has been used to train the model.
The Indian government also said last week that it has assessed ethical issues in AI, such as bias and privacy, and is working towards a robust regulatory framework for the industry. However, it has not yet announced any plans for legislation.
OpenAI has, in turn, placed the responsibility for resolving these issues on enterprises. Samsung, for instance, has decided to build its own internal AI for employee use, and in the meantime has limited staff ChatGPT prompts to a kilobyte, or 1,024 characters, of text.
Employing the ChatGPT API in place of the web tool is another way for businesses to work around the problem. Data submitted through the API, which is a paid service, is not used by OpenAI to train its models by default. OpenAI has also provided a form, referenced in its terms of service, that lets users voluntarily opt out of having their data used for training.
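As a rough sketch, routing a request through the API rather than the web interface might look like the following. The endpoint and payload shape match OpenAI's public chat-completions API at the time of writing; the helper names and model choice are our own assumptions, and the final call is left as a dry run that only prints the payload.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON payload for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send(prompt: str) -> str:
    """Send the prompt; requires OPENAI_API_KEY in the environment."""
    payload = build_request(prompt)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Dry run: inspect the payload without transmitting anything.
payload = build_request("Summarise these meeting notes ...")
print(json.dumps(payload, indent=2))
```

Keeping the request construction in-house also gives a natural place to bolt on checks, such as a prompt-size limit or a scan for sensitive strings, before anything reaches OpenAI's servers.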