Elon Musk, CEO of Twitter, and Yuval Noah Harari, the historian known for the book Sapiens: A Brief History of Humankind, along with other specialists, researchers, and scientists, have signed a letter calling for a pause in the development of AI (Artificial Intelligence).
The letter was produced by the Future of Life Institute and recommends a hiatus in AI development of at least six months, one that should be transparent. Learn more about the document’s content and the discussions around the topic below.
What does the letter say about artificial intelligence?
Job automation and the rise of misinformation are two of the main concerns about AI technologies highlighted in the letter. The text also demands that any suspension of development of tools like GPT-4, Midjourney, and similar AI solutions be “public and verifiable”.
Regarding investments, the letter also points out that OpenAI, Microsoft, and Google are locked in a technology race while governments and public institutions struggle to keep up with each new step. According to the document, the AI sector is moving at a pace that is increasingly difficult to follow, and even the actors involved with the technology acknowledge this.
“If this pause cannot be regulated quickly, governments should step in and institute a suspension”
Letter from the Future of Life Institute

The text also mentions that the AI laboratories producing these technologies are unable to make accurate assessments of them and are not even capable of controlling these digital minds. What troubles the signatories — from Apple co-founder Steve Wozniak to pioneering researchers like Yoshua Bengio and Stuart Russell to former Google design ethicist Tristan Harris — are questions surrounding the spread of false information, the replacement of human minds, and the very “control of our civilization”. In this sense, the document also addresses when such systems should be developed at all:
“Powerful AI systems should only be developed when we are confident that their effects will be positive and their risks will be managed.”
Letter from the Future of Life Institute
Future of Life Institute and Responsible Technology
Founded in 2014 and backed by a donation from Elon Musk, the Future of Life Institute seeks to steer new technological advances away from possible risks to life, both for humanity and for other living beings. The organization focuses not only on AI but also monitors work in areas such as biotechnology and nuclear energy. Its vision of technological responsibility is a world where diseases are eradicated and democracies are strengthened.

Future of Life’s work was recognized by the United Nations (UN) in 2020, when the UN appointed the institute as a representative of civil society on issues involving AI. Earlier this year, however, the institute’s president, Max Tegmark, had to apologize after it offered a grant to a far-right media outlet, Sweden’s Nya Dagbladet.
What is the position of other companies on AI development?
Established companies such as Google and Microsoft declined to comment on the Future of Life Institute’s letter. Both are moving ever faster to offer AI solutions, spurred by the work of OpenAI, the company behind ChatGPT. Incidentally, the “mother” of one of the most famous AIs of the moment received a 10-billion-dollar investment from Microsoft. At the same time, the company co-founded by Bill Gates is using OpenAI technology in its search engine, Bing.

Coincidentally, Bill Gates published a letter this month addressing the possible effects that AI solutions may have on the future. Google, meanwhile, is maintaining its investments in artificial intelligence through Bard, which is not yet fully available to the public.
Expert concerns about AI and GPT-4
Even after improvements, GPT-4 still delivers some hallucinated results and can respond to users with harmful language. But some believe the pause should actually be used to better understand the benefits of AI technologies, not just their harms. This is the view of Peter Stone, a researcher at the University of Texas at Austin (USA).
“I think it’s worth having a bit of experience with how [AIs] can be used properly or not, before going on to develop the next [technology]. This shouldn’t be a race to produce a new model of artificial intelligence and release it before the others.”
Peter Stone
In contrast to this position, one of the signatories, Emad Mostaque, founder of the artificial-intelligence company Stability AI, told Wired that AI solutions are a possible threat to the very existence of society. He also argued that investments should be reconsidered in light of where this technology race may lead.
“It’s time to set commercial priorities aside and take a break for the good of all, to do more research rather than enter a race with an uncertain future.”
Emad Mostaque
Source: Wired | Tech Crunch | Future of Life | Deputy World News