Friday, May 3, 2024

Should You Worry If Machines Are Getting Better at Coding?

Toran Bruce Richards, the creator of Auto-GPT, believes the technology may help prevent large-scale job losses from automation driven by closed-source AI.

We have all seen human coders struggle to wrestle a task into submission. Now imagine a world in which machines have mastered the art of programming, eliminating defects and minimising downtime on their own, thanks to the development of foundational models (GPTx).

What’s this? It’s already taking place. Enter Auto-GPT, an open-source autonomous agent built on GPT-4 that generates and manages its own prompts and outputs. So, do you need to worry?

Andrej Karpathy, former director of AI at Tesla who recently rejoined OpenAI, supports the idea, asserting that “AutoGPTs are the next frontier of prompt engineering.” Karpathy said this in a tweet about the most recent version of Auto-GPT, which can run Python scripts and write its own code using GPT-4. It even has a voice!

Karpathy Weighs In

Karpathy offered an intriguing perspective on the model. Unlike humans, he argued, GPTs are entirely unaware of their own strengths and weaknesses, such as their small context window and weak mental arithmetic, which can sometimes produce unanticipated outcomes. However, by chaining GPT calls in loops, one can build agents that perceive, think, and act in pursuit of objectives specified in English prompts.
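The loop Karpathy describes can be sketched in a few lines of Python. Everything here is illustrative: `call_llm` is a hypothetical stand-in for a real GPT-4 API call (stubbed out so the sketch is self-contained), and the action strings are invented. The point is the shape of the loop, which alternates between asking the model for the next action and feeding the result back in as an observation.

```python
def call_llm(prompt):
    # Hypothetical stand-in for a GPT-4 API call. A real agent would
    # query the model here; this stub just returns canned actions.
    if "search" not in prompt:
        return "ACTION: search('python downtime')"
    return "ACTION: finish('report written')"

def run_tool(action):
    # Execute whatever action the model requested (stubbed).
    return f"result of {action}"

def agent_loop(goal, max_steps=5):
    """Perceive -> think -> act loop driven by an English goal prompt."""
    history = []
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nHistory: {history}\nWhat next?"
        action = call_llm(prompt)               # think
        if action.startswith("ACTION: finish"):
            return action, history              # goal reached
        observation = run_tool(action)          # act
        history.append((action, observation))   # perceive
    return "ACTION: finish('step limit reached')", history
```

Because the loop only ever communicates with the model through the prompt string, the agent’s “state” is whatever fits in the context window, which is exactly the limitation Karpathy highlights.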

Karpathy also described a “reflect” phase for feedback and learning: results are assessed, rollouts are stored in memory, and then loaded back into prompts for few-shot learning. This few-shot “meta-learning” method enables learning from anything that fits into the context window. Gradient-based learning is less straightforward, however, because there are no ready-made APIs for supervised fine-tuning, reinforcement learning from human feedback (RLHF), or LoRA finetunes, which restricts fine-tuning on large amounts of experience.
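The “reflect” step can be illustrated with a toy memory: completed rollouts are scored, stored, and the best ones are prepended to the next prompt as few-shot examples. The names here (`Memory`, `build_prompt`) and the scoring are invented for illustration, not Auto-GPT’s actual API.

```python
class Memory:
    """Stores (rollout, score) pairs from past episodes."""
    def __init__(self):
        self.rollouts = []

    def add(self, rollout, score):
        self.rollouts.append((rollout, score))

    def best(self, k=2):
        # Retrieve the top-k scoring rollouts for few-shot prompting.
        ranked = sorted(self.rollouts, key=lambda p: p[1], reverse=True)
        return [rollout for rollout, _ in ranked[:k]]

def build_prompt(task, memory):
    # Few-shot "meta-learning": prepend past successes to the new task,
    # so learning happens entirely inside the context window,
    # with no gradient updates to the model itself.
    examples = "\n".join(f"Example: {r}" for r in memory.best())
    return f"{examples}\nTask: {task}"

memory = Memory()
memory.add("fixed import error by adding 'import os'", score=0.9)
memory.add("retried failing request, still failed", score=0.2)
prompt = build_prompt("fix the failing build", memory)
```

This is why the approach is bounded by the context window: everything the agent “knows” from experience must be re-serialised into each prompt.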

Karpathy thinks AutoGPTs may evolve into AutoOrgs, with an AutoCEO, AutoCFO, AutoICs, and more, much as people gather in organisations to specialise and work in parallel towards common goals.

Supporting AutoGPTs

The Auto-GPT repository received over 8,000 stars within a week of its release, signalling its growing popularity. It also triggered a flurry of discussion among developer communities: some have praised its capabilities, while others note that it still needs human debugging. One user even compared the model’s coding to the old-fashioned method of rubber-duck debugging.

Reddit users have offered differing opinions on the subject. Some hope the base models won’t be made accessible to the general public, citing worries about possible abuse. Others contend that keeping the technology secret would make the AI even riskier: if all progress stays under wraps, a small group could capture the AI and use it to monitor and control the public.

One commenter suggested a potential answer: make the model publicly accessible along with the resources and tools required to ensure ethical experimentation. This would let ethical researchers take preventative action against potential rogue-AI scenarios. As the commenter put it, “The only way to thwart a malicious AI is through a benevolent AI.”

This post is for you if you want to try Auto-GPT but are unable to set it up yourself: post your top queries below, and Toran Richards will test them out and record the results for you.
