An open letter proposing a freeze on the training of ever more powerful AI models, while experts debate whether the technology should be reined in or embraced, has polarised the AI community.
GPT is the greatest technological development since 1980, according to Bill Gates. Last month, OpenAI released GPT-4, the most sophisticated large language model to date. Many believe GPT-4 marks a turning point on the road to artificial general intelligence (AGI), the larger goal that OpenAI is determined to achieve.
However, the rapid and substantial advances the field has seen recently have raised concerns among AI professionals. Against this backdrop, a group of AI specialists and sceptics, including prominent figures such as Elon Musk, Gary Marcus, and Steve Wozniak, signed a recent open letter calling on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.
The letter contends that without adequate safeguards and checks and balances in place, the unmatched breakthroughs in AI could represent an existential threat to humanity; more than 1,100 people have signed it as of this writing. The pause, it says, should involve all key players and be public and verifiable, and governments should step in and impose a moratorium if it cannot be enacted quickly.
Andrew Ng, founder and CEO of Landing AI, called halting AI research a bad idea. He argues that the technology is already having an impact on food, healthcare, and education, and that this will benefit a great many people.
GPT-4 does indeed have some remarkable use cases. In India, the technology was used to build KissanGPT, a chatbot designed to help farmers with their agricultural questions. A dog owner even recently used GPT-4 to help save his pet.
“Banning matrix multiplications for six months while disregarding enormous benefits is not a response to the possible risks AI poses. I think humanity will be able to embrace the wonderful aspects of any technology while figuring out safety precautions. There is no reason to halt progress,” tweeted Anima Anandkumar, senior director of AI research at NVIDIA.
Yann LeCun, chief AI scientist at Meta, who has been critical of the technology himself, has likewise declined to sign the open letter. In a tweet, LeCun voiced his disagreement with the movement’s overall premise.
What we believe
The rapid advancements in AI are undoubtedly exciting and rather terrifying at the same time. Big-tech companies dismantling their responsible-AI teams, rushing to deploy AI models with minimal testing, and pressuring researchers to churn out new ideas is a hazardous mix for the world.
We therefore think it is crucial that the signatories share the same aims and that AI is developed and deployed responsibly and ethically. The open letter itself, however, was poorly written and relied on some questionable word choices.
The letter also falls short of clearly describing the dangers posed by AI technology at its current level, and its call to immediately halt AI research is a contentious proposal in its own right. Even so, a broader conversation between developers and decision-makers about the ethical application of AI is a welcome development.