Tech News

The artificial intelligence ChatGPT can be used to create malicious code

Using artificial intelligence, criminals can easily create malicious code.

ChatGPT is one of the most prominent artificial intelligence tools of the moment, thanks to the ease with which it creates text from simple prompts. That capability caught the attention of cybersecurity experts, who point out that the program can just as easily produce malicious code.

Check Point Research carried out the investigation, using ChatGPT and Codex, both OpenAI tools, to design malicious emails, scripts and a complete infection chain.

The researchers did so after noting that attack code created with artificial intelligence is already being shared in dark web forums.

“ChatGPT has the potential to significantly alter the cyber threat landscape. Now anyone with minimal resources and zero knowledge of code can easily exploit it,” said Manuel Rodríguez, the company’s security engineering manager for Latin America.

AI to create cyber attacks

For the investigation, the researchers set out to design code that would give them remote access to other computers. They succeeded.

Using both platforms, the artificial intelligence created a phishing email with an attached Excel document containing malicious code capable of downloading reverse shells, the remote connection method used for the attack.

With that code, the team was able to impersonate a hosting company and produce a malicious VBA macro in an Excel document; VBA is Microsoft’s proprietary programming language for its spreadsheet application.

Meanwhile, with content generated by Codex, the researchers were able to run a reverse shell on a Windows machine that connected to a specific IP address and then remotely run a full scan of an external computer.

Together, this amounts to a complete attack package for a cybercriminal, within reach in a free tool and through a process that needs only the right prompts to produce results capable of fueling cyberattacks, which rose 28% in 2022.

The finding puts the potential of artificial intelligence on alert: tools such as ChatGPT and Codex have also been used for positive ends, but in the wrong hands they can cause a great deal of damage.

“It is easy to generate malicious emails and code. I believe that these AI technologies represent another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities,” Rodríguez warns.

ChatGPT’s training data could run out

A study by Epoch AI, an organization that researches the development of artificial intelligence, asserts that 2026 is the latest year the current stock of high-quality data sets is expected to last; these data sets supply the material these technologies use to create content.

This sets off alarms for platforms such as ChatGPT, DALL·E 2 and Midjourney, which use that pool of information to generate their content through text and machine learning.

The information for these data sets is collected publicly and at large scale so that the platform learns correctly. Humans are also involved in the process, since an important step is ‘cleaning’ the data manually so that the model can respond appropriately to user requests.

The study’s authors note that this is a slow and expensive process, and although there are tools to help, such as artificial intelligence itself, using them to review the models carries a high level of risk that can make the process even more complicated.
