In 2023 the world witnessed great innovations in artificial intelligence (AI). Depending on what one reads, these advances have come either to improve people’s lives or to destroy them completely in a kind of rebellion of the machines. One of the most impactful news stories of the year was the launch of ChatGPT, which generated both enthusiasm and fear.
ChatGPT is part of a new generation of AI systems that can converse, generate readable text, and even produce novel images and videos based on what they have “learned” from a vast database of digital books, online writings, and other media.
Derek Thompson, editor and journalist at the magazine The Atlantic, posed a series of questions to find out whether people really should fear the new advances in AI as the beginning of the end of the human race, or whether they are inspiring tools that will improve people’s lives.
Consulted by the American outlet, computer scientist Stephen Wolfram explains that large language models (LLMs) such as ChatGPT work in a conceptually simple way: a neural network is created and trained on a large sample of text available on the web, such as books and digital libraries, and from that training it generates new text.
If someone asks an LLM to imitate Shakespeare, it will produce text with an iambic pentameter structure. If asked to write in the style of a particular science fiction writer, it will imitate that author’s more general characteristics.
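To make Wolfram’s description concrete, here is a deliberately tiny sketch of the underlying idea: predict the next word from patterns counted in sample text. The corpus, the bigram-counting approach, and the function names are illustrative assumptions for this example only; real LLMs use deep neural networks trained on billions of documents, not simple word counts.

```python
import random
from collections import Counter, defaultdict

# Toy version of the idea Wolfram describes: learn from sample text
# which word tends to follow which, then generate new text by
# repeatedly sampling a likely next word. This bigram counter is
# only an illustrative sketch, not how production LLMs work.

corpus = (
    "to be or not to be that is the question "
    "whether tis nobler in the mind to suffer"
).split()

# "Training": count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def generate(start, length=8):
    """Generate text by sampling each next word from the learned counts."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break  # no continuation of this word was seen in training
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("to"))  # e.g. "to be that is the question whether tis"
```

The same principle, scaled up with neural networks and vastly more data, is what lets a model pick up recurring patterns such as an author’s vocabulary and rhythm.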
“Experts have known for years that LLMs are awesome, they create fictional things, they can be useful, but they’re really stupid systems, and they’re not scary”, said Yann LeCun, chief AI scientist at Meta, consulted by The Atlantic.

The US outlet points out that the development of AI is concentrated in large companies and in startups backed by capital from technology investment firms.
The fact that these developments are concentrated in companies and not in universities and governments can improve the efficiency and quality of these AI systems.
“I have no doubt that AI will develop faster within Microsoft, Meta, and Google than it would within, for example, the United States military,” Derek Thompson notes.
However, the American outlet warns that companies could make mistakes by rushing to market with a product that is not ready. For example, Microsoft’s Bing chatbot was aggressive toward the people who used it when it was first released. There are other errors of this type, such as Google’s chatbot, whose hurried launch made it a failure.
The philosopher Toby Ord warns that these advances in AI technology are not keeping pace with the development of ethics in the use of AI. Consulted by The Atlantic, Ord compared the use of AI to “a prototype jet engine that can reach speeds never seen before, but without corresponding improvements in steering and control.” For the philosopher, it is as if humanity were aboard a powerful Mach 5 jet without the manual for steering the aircraft in the desired direction.
Regarding the fear that AIs are the beginning of the end of the human race, the outlet points out that systems like Bing and ChatGPT are not good examples of artificial intelligence, but they do show our capacity to develop a superintelligent machine.
Others fear that AI systems will not be aligned with the intentions of their designers, a potential problem that many machine ethicists have warned about.

“How can we ensure that the AI we build, which could well be significantly more intelligent than anyone who has ever lived, is aligned with the interests of its creators and of the human race?”, wonders The Atlantic.
And the great fear behind that question: a superintelligent AI could be a serious problem for humanity.
Another question that worries experts, as formulated by the American outlet: “Do we have more to fear from non-aligned AI or from AI aligned with the interests of bad actors?”
One possible solution is to develop a set of laws and regulations ensuring that the AIs being built are aligned with the interests of their creators and that those interests do not harm humanity; developing an AI outside these laws would be illegal.
However, there will always be actors or regimes with dishonest interests who could develop AIs with dangerous behaviors.
Another issue that raises questions: how much should education change in response to the development of these AI systems?
These AI systems are also proving useful in other industries, such as finance and programming. In some companies, AI systems already outperform analysts at picking the best stocks.
“ChatGPT has demonstrated good writing skills for demand letters, summary pleadings and judgments, and even drafted questions for cross-examination,” said Michael Cembalest, Chairman of Market and Investment Strategy for J.P. Morgan Asset Management.
“LLMs are not replacements for lawyers, but they can increase their productivity, particularly when legal databases like Westlaw and Lexis are used to train them,” Cembalest added.
For a few decades it has been said that AIs will replace workers in some professions, such as radiology. Yet the use of AI in radiology remains an adjunct for clinicians rather than a replacement. As in radiology, these technologies are expected to serve as a complement that improves people’s lives.