ChatGPT could be used for ‘phishing’ and spreading hate messages

The growing use of ChatGPT, one of the latest generative artificial intelligence (AI) developments made available to the public, brings security advantages such as the automation of routine tasks or the creation of friendly ‘chatbots’, but it can also pose risks, such as the use of this technology to spread hate messages or to carry out Internet fraud such as ‘phishing’.

This is the conclusion of an analysis by Prosegur Research, the company’s forum for reflection and analysis, which examines the implications of ChatGPT from a security perspective and identifies the main risks and opportunities that its application opens up in different areas.

This technology was not created for malicious use. – Photo: dpa/picture alliance via Getty Images

Social polarization is one of the ten risks identified by the study, which explains that, because generative artificial intelligences can produce multimedia content, they can be used to spread messages of hate or discrimination, as well as radical or extremist content.

‘Phishing’, the automated generation of emails that appear genuine in order to trick users into revealing confidential information or granting access to computer systems, is another of the risks this technology entails, since its high-quality writing does not raise suspicion.

The generation of fake news, “an issue that affects national security, damaging social cohesion and democratic principles,” is another point highlighted by the study, which also cites ‘doxing’, or the dissemination of hoaxes to damage the credibility of organizations, as a further negative aspect of this AI.


Possible information leaks or data theft, “high-quality” scams and frauds, and the creation of malicious ‘chatbots’ aimed at obtaining sensitive information or pursuing illicit financial gain are also on the dark side of this technology.

The study also warns of identity theft through ‘deep fakes’, given this AI’s ability to generate text, images and video and to simulate voices; the generation of malicious code; and the use of this new technology in the geopolitical and geoeconomic power struggle, since “data and technologies are at the center of the configuration of power”.


Automation of tasks and access to information

However, this technology was not created for malicious use, and it can generate opportunities in the security field, such as automating routine security tasks, which improves employee well-being by eliminating repetitive and tedious work, according to the report.

Just as there is a risk of malicious ‘chatbots’ being created, the report adds, there are also “attractive” ones, with a friendlier and more human profile, that improve interaction with customers and other people.

This AI allows structured access, through natural language, to huge amounts of security-relevant information, enhancing open-source intelligence (OSINT) capabilities, and, according to the company, it can be valuable in risk analysis and in pattern and anomaly recognition.


In intelligence work, ChatGPT can contribute to generating hypotheses, identifying trends and building scenarios, and when it comes to recommendations it “does not replace in any way the work of an international security analyst but supports some tasks”.

It helps in predictive analytics by providing predictions with associated probabilities, drawing on the huge amount of data it is trained on, while also helping to detect ‘phishing’, identify vulnerabilities and generate secure passwords.
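The secure-password generation the study mentions can be illustrated with a minimal sketch (this is an illustrative example, not Prosegur’s or OpenAI’s method) using Python’s cryptographically secure `secrets` module:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits and punctuation.

    Uses the `secrets` module, which draws from the operating system's
    cryptographically secure random source, rather than `random`,
    which is predictable and unsuitable for security purposes.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A chatbot asked to “generate a secure password” would typically produce a string of this kind; the point is that the underlying randomness must come from a secure source, not from a reproducible pseudo-random generator.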

Generative artificial intelligences also have an educational dimension, according to the study, since they can serve as a first point of contact for learning about issues related to security, technology or risk.

*With information from Europa Press.
