
Tech leaders call for pause of GPT-4.5, GPT-5 development due to ‘large-scale risks’

With the arrival of tools like OpenAI's ChatGPT and Google's Bard, generative AI has recently advanced at an astounding rate. Yet seasoned experts in the field of artificial intelligence are deeply concerned about this rapid progress; more than a thousand of them have signed an open letter urging AI researchers to put the brakes on.

The Future of Life Institute, an organization whose stated mission is "steering transformative technology towards benefitting life and away from extreme large-scale risks," published the letter on its website. Signatories include Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, and former presidential candidate Andrew Yang.

The letter demands that all AI labs immediately pause the training of AI models more powerful than the newly released GPT-4 for at least six months. This pause should be "public and verifiable" and give time to "jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

According to the letter, this is necessary because "AI systems with human-competitive intelligence can pose profound risks to society and humanity." Those risks include the spread of propaganda, the elimination of jobs, the possibility that AI could make humans obsolete and replace them, and the "loss of control of our civilization." The authors also state that the decision to press ahead with this future should not be left to "unelected tech leaders."

AI ‘for the clear benefit of all’

The letter was published shortly after claims that GPT-5, the next version of the technology that powers ChatGPT, could achieve artificial general intelligence. If true, that would mean it could understand anything a human can, which might make it powerful in ways that are not yet fully understood.

Furthermore, the letter argues that responsible planning and management of AI development is not happening, "even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control."

The letter argues that new governance frameworks must be established to oversee the advancement of AI, help distinguish AI-generated content from human-created content, hold AI labs such as OpenAI liable for any harm their systems cause, and more.

The authors conclude on a positive note, saying that "humanity can enjoy a flourishing future with AI… in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt." They argue that pausing work on AI systems more powerful than GPT-4 would allow this to happen.

Will the letter have the desired impact? That's difficult to say. OpenAI has undeniable financial and reputational incentives to keep developing more advanced models. The letter's authors, however, clearly believe those incentives are too dangerous to follow, given the many potential problems and how little is understood about them.