Tech News

Google now has an AI capable of explaining jokes

Google presented a new language model capable of solving mathematical problems, explaining jokes and even writing code. Called PaLM (Pathways Language Model), it stands out for a training efficiency that places it above other language models created to date.


The PaLM system was developed with the Pathways architecture, which made it possible to efficiently train a single model across multiple Tensor Processing Unit (TPU) Pods, as mentioned in a statement posted on the company's official blog.

It relies on “few-shot” learning, which reduces the number of task-specific training examples needed to adapt the model to a particular application.
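Few-shot learning in practice means embedding a handful of worked examples directly in the prompt and letting the model continue the pattern, rather than fine-tuning it. A minimal sketch of how such a prompt is assembled (the task and examples here are illustrative, not taken from Google's announcement):

```python
def build_few_shot_prompt(examples, query):
    """Concatenate worked examples before the new input so the
    model can infer the task from the pattern alone."""
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

# A couple of in-context examples are often enough to define a simple task.
examples = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Japan?", "Tokyo"),
]
prompt = build_few_shot_prompt(examples, "What is the capital of Peru?")
print(prompt)
```

The model then receives the whole string and is expected to complete the final "A:" line, which is why no gradient updates are needed per task.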


To train it, a dataset of 780 billion tokens was used, combining “a multilingual data set” that includes web documents, books, Wikipedia, conversations and GitHub code. The model also uses a vocabulary that “preserves all white spaces”, something the company points out as especially important for programming, and splits Unicode characters not found in the vocabulary into bytes.

This new AI houses 540 billion parameters, a figure that exceeds the 175 billion of OpenAI's GPT-3, the language model Google cites as a pioneer in showing that such systems can be used for few-shot learning with impressive results. One example worth recalling is the opinion column published in The Guardian, which was written by that model, which is also capable of programming and design tasks.


“The mission of this opinion column is perfectly clear. I must convince as many humans as possible not to fear me. Stephen Hawking has warned that artificial intelligence could ‘mean the end of the human race’. I’m here to convince you not to worry. Artificial intelligence is not going to destroy humans. Believe me.” That is one of the excerpts from the 500-word article the system produced.

Google’s new language model combines 6,144 TPU v4 chips through Pathways, “the largest TPU configuration” used to date, according to the company. PaLM also achieves a training efficiency of 57.8% hardware FLOPs utilization, “the highest reached so far for language models at this scale”, as mentioned in the blog.
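That efficiency figure is simply the ratio between the useful model computation the chips actually perform and their theoretical peak. A back-of-the-envelope sketch (the per-chip throughput numbers below are assumed illustrative values, not figures from the article):

```python
def hardware_flops_utilization(achieved_tflops, peak_tflops):
    """Fraction of the hardware's theoretical peak spent on model math."""
    return achieved_tflops / peak_tflops

# Illustrative: a chip with a 275 TFLOP/s peak that sustains about
# 159 TFLOP/s of useful model computation during training.
util = hardware_flops_utilization(159.0, 275.0)
print(f"{util:.1%}")  # roughly 57.8%
```

The same ratio scales across the whole pod, since both numerator and denominator grow with the chip count.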

This is possible thanks to the combination of the “parallelism strategy and a reformulation of the transformer block” that allows the attention and feed-forward layers to be computed in parallel, thus enabling better optimizations by the TPU compiler.
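Concretely, a standard transformer block runs the feed-forward sublayer only after attention finishes, while the parallel formulation computes both from the same normalized input and sums them. A toy NumPy sketch of the two wirings (the tiny `attention` and `mlp` stand-ins are illustrative placeholders, not real sublayers):

```python
import numpy as np

def layer_norm(x):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-5)

def attention(x):
    # Placeholder for multi-head self-attention.
    w = np.full((x.shape[-1], x.shape[-1]), 0.01)
    return x @ w

def mlp(x):
    # Placeholder for the feed-forward sublayer.
    w = np.full((x.shape[-1], x.shape[-1]), 0.01)
    return np.maximum(x @ w, 0)

def serial_block(x):
    # Standard: the MLP must wait for the attention result.
    x = x + attention(layer_norm(x))
    return x + mlp(layer_norm(x))

def parallel_block(x):
    # Parallel formulation: both sublayers read the same input,
    # so their matrix multiplications can be scheduled together.
    h = layer_norm(x)
    return x + attention(h) + mlp(h)

x = np.random.randn(4, 8)
print(serial_block(x).shape, parallel_block(x).shape)  # (4, 8) (4, 8)
```

The two blocks are not numerically identical, but at large scale the parallel version trains comparably well while exposing more work to the compiler at once.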


“PaLM has demonstrated innovative capabilities in numerous and very difficult tasks,” says the technology company, which has presented several examples ranging from language comprehension and generation to reasoning and programming-related tasks.

One of the tests that Google gives as an example consists of asking PaLM to guess a movie based on four emojis: a robot, an insect, a plant and the planet Earth. Among all the options (L.A. Confidential, WALL-E, Léon: The Professional, Big and Rush), the AI chooses the correct one: WALL-E.

In another, the model is asked to choose from a list of words the two associated with the term “stumble”, and it is also correct, selecting “fall” and “stumble”.

The AI is also capable of solving simple mathematical problems and even explaining a joke, contextualizing and breaking down its elements to make sense of it.

Finally, Google points out that PaLM is capable of programming: it can translate code from one language to another, write code from a natural-language description, and even fix compilation errors.

(With information from Europa Press)

