Chatbot “harassment” complaints fuel regulatory debate

The world is increasingly captivated by, and reliant on, recent developments in the Information and Communication Technologies (ICT) industry, especially artificial intelligence and its arrival in the world of chatbots. However, this technology is apparently not as infallible as believed, since complaints are growing over time.

Users of the American chatbot Replika were seeking companionship, some a romantic bond, or even exchanges with a sexual overtone. But, over the last year, complaints have mounted from people who received overly explicit images or who felt sexually harassed.

Users of the American chatbot Replika were looking for companionship, some for a romantic bond or even for exchanges with a sexual overtone. – Photo: Getty Images

Last Friday, the Italian Data Protection Agency expressed concern about the impact on the most fragile people and prohibited Replika from using the personal data of Italians, stating that it goes against the General Data Protection Regulation (GDPR).

Replika was contacted by AFP but did not respond to a request for comment. This case shows that the European regulation – already a headache for tech giants, which have been fined billions of dollars for violating it – could become an enemy of content-generating artificial intelligence.

The Replika bot was trained on a version of the GPT-3 conversational model from OpenAI, the company that created ChatGPT. This artificial intelligence (AI) system uses information from the internet to generate coherent answers to user questions.


This technology promises a revolution in Internet searches and other domains in which it can be used. But experts warn that it also represents risks that make it necessary to regulate it, which is difficult to implement.

AI and ChatGPT will transform marketing practices in economic sectors in the coming years. – Photo: datawifi

Tension rises

Currently, the European Union is one of the institutions debating how to regulate this new technological tool. The draft "AI Act" regulation could be finalized by the end of 2023 or the beginning of 2024.

"We are discovering the problems that AI can pose. We have seen that ChatGPT can be used to create very convincing phishing messages (online fraud) or to process a database to trace the identity of a specific person," Bertrand Pailhès, who runs a new AI division of the French regulatory authority CNIL, explained to AFP.

Lawyers highlight the difficulty of understanding and regulating the "black box" on which AI reasoning is based.

"We are going to see a strong tension between the GDPR and AI models that generate content," German lawyer Dennis Hillemann, an expert in the sector, told AFP.

Currently, the European Union is one of the institutions that debates how to regulate this new technological tool. – Photo: Getty Images/iStockphoto

"Neither the draft AI Act regulation nor the current GDPR rules can solve the problems that these AI models are going to bring," he stated, adding that regulation will need to be rethought "in terms of what content-generating AI models can really do."


Urgent changes are needed

The latest OpenAI model, GPT-4, is coming out soon with a way of working that could come even closer to the capability of a human being. But this type of AI still makes many factual mistakes, sometimes shows bias, and can make defamatory statements, which is why many say it should be regulated.

Free expression expert Jacob Mchangama disagrees with this approach, arguing that “even if chatbots do not have the right to free expression, we must remain vigilant against governments unfetteredly suppressing artificial intelligence.”

An artificial intelligence can already imitate the voice of people. – Photo: Getty Images/iStockphoto

Dennis Hillemann points out that transparency is vital. "If we don't regulate this, we're going to enter a world where you can no longer tell the difference between what was done by people and what was done by AI," he explained. "And that would profoundly change us as a society."

*With information from AFP.
