AI tools not far from being scary, need to get them right: OpenAI CEO


As ChatGPT takes the world by storm, its creator Sam Altman, CEO of OpenAI, has stressed that the world may not be “that far from potentially scary” artificial intelligence (AI) tools, and it’s important that such AI chatbots are audited independently before they reach the masses.

Artificial general intelligence (AGI) comes with serious risks of misuse, drastic accidents, and societal disruption.

“Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right,” Altman said in a blog post.

At some point, Altman wrote, it may be important to get an independent review before starting to train future systems, and “for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

“We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important. Finally, we think it’s important that major world governments have insight about training runs above a certain scale,” Altman elaborated.

Companies are currently using ChatGPT for writing code, copywriting and content creation, customer support, and preparing meeting summaries.

Meanwhile, the general public is using the AI chatbot to write essays, exams, poems and more.

According to Altman, OpenAI wants to navigate these massive risks successfully.

“In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimise ‘one shot to get it right’ scenarios,” he noted.

OpenAI is now working towards creating increasingly aligned and steerable models.

“Our shift from models like the first version of GPT-3 to InstructGPT and ChatGPT is an early example of this,” he said.
