Tech News

ChatGPT in the dock: An overview of six morally questionable uses

Learn about six ChatGPT use cases that raise concerns about the technology's potential for misuse. While AI offers impressive possibilities, it also poses risks in areas such as cybersecurity, education and media.

1. Creation of malware

The artificial intelligence ChatGPT has the potential to generate malware at scale, making detection and neutralization more difficult for cybersecurity experts. Indeed, researchers can use the technology to create complex, hard-to-detect polymorphic programs, exploiting the AI's broad code-generation capabilities.

  • Nonstop output: AI does not sleep and can therefore produce malware around the clock.
  • Code variations: Researchers can use ChatGPT to create different versions of malware, making it harder to detect or stop.

2. Academic cheating

Another area where ChatGPT can be problematic is education. Because it can generate text on any topic, the tool can be used by students to cheat at school. Teachers have already reported catching students in the act, and some schools have even banned the use of the app.

AI risks becoming yet another tool that young people will have to master at school, with potentially harmful consequences for their education and development.

3. Spam on dating apps

In the field of dating apps, ChatGPT has also been used to automate conversations with potential partners. While this isn’t necessarily scary in itself, knowing that you might be interacting with a computer program rather than a real person can be unsettling.

  • Automation: Some users use ChatGPT to chat with their matches on Tinder or similar apps.
  • Disruption: The prospect of interacting with a robot rather than a potential partner can be destabilizing for some users.

4. A threat to journalism and writing professions

Should journalists and writers fear for their jobs? The growing use of ChatGPT raises questions about the future of professions tied to the production of written content. This AI can quickly generate text on any subject, which could threaten jobs in the field.

5. Phishing and other scams

Although it is difficult to prove it is already happening, a tool like ChatGPT would be ideal for running phishing campaigns. Phishing messages are often easy to spot because of their clumsy wording, but with ChatGPT that would no longer be the case. Experts have already warned that AI could be used in this way.

6. Deceiving recruiters

Finally, ChatGPT could be used to fool recruiters during the job hunt. A recent study showed that AI-generated answers were rated better than those of 80% of human candidates. The AI can work in the keywords recruiters look for and thus more easily pass the filters set up by human resources departments.

  • Deceptive potential: The use of AI can distort the quality of applications and give an unfair advantage to some applicants.
