A research group from the University of Chicago has created Glaze, a tool designed to protect works made by human artists and prevent them from being used to train Artificial Intelligence (AI) models.
AI is now capable of creating high-quality digital images through platforms such as DALL-E, developed by OpenAI, or Stable Diffusion, by Stability AI. These programs use machine learning models and are trained on databases of images and illustrations, in many cases works credited to artists.
Given this possibility, authors and creators of artistic works have expressed concern that the technology could dehumanize art, a problem that also involves the copyright attached to these works.
Against this backdrop, researchers from the SAND Lab group at the University of Chicago created Glaze, a tool that lets artists protect their works and their personal style when they publish their art online, preventing AI from copying and learning their technique.
As they explain on their website, Glaze adds "very small" changes to the original artwork before it is posted online. These changes are almost imperceptible to the human eye, so they barely alter the original piece. However, they modify it enough that AI models are unable to copy "the author's style".
In this way, the alterations are applied to the drawing or painting as if they were a new style layer: the software disguises the images so that the models incorrectly learn the unique characteristics that define an artist's style, thwarting subsequent attempts at artificial plagiarism.
Because the AI interprets the work in a style different from the original, it is unable to create images that reproduce a style identical to that of these artists, protecting them from copies or reproductions made without their consent.
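The underlying idea, described only at a high level on the Glaze site, is to apply a bounded, nearly invisible perturbation to the pixels of an image before publishing it. The sketch below is a purely conceptual illustration of that idea using small random noise; it is not Glaze's actual algorithm, which computes its perturbations by optimizing against AI models rather than at random. The file names, the epsilon bound, and the use of Python with NumPy and Pillow are assumptions made for the example.

    import numpy as np
    from PIL import Image

    def add_bounded_perturbation(in_path, out_path, epsilon=4):
        # Illustrative only: add random pixel noise bounded by +/- epsilon
        # (on the 0-255 scale) so the change stays nearly invisible.
        # Glaze's real perturbations are computed adversarially, not randomly.
        img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
        noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
        perturbed = np.clip(img + noise, 0, 255).astype(np.uint8)
        Image.fromarray(perturbed).save(out_path)

    # Hypothetical usage:
    # add_bounded_perturbation("artwork.png", "artwork_cloaked.png", epsilon=4)

A random perturbation of this kind would not by itself stop a model from learning a style; the point is only to show what "small, bounded changes to the pixels" means in practice.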
To verify their results, the developers, led by Neubauer Professors of Computer Science Ben Zhao and Heather Zheng, have carried out a series of tests and studies involving more than 1,000 professional artists.
Despite the progress that a tool like Glaze represents, its creators stress that it is still a project under research and that they will continue updating their tools to "improve their robustness against new advances in AI art models."

Artificial Intelligence: complaints of "harassment" by a chatbot fuel the debate on regulation
The world is currently astonished by, and increasingly reliant on, recent developments in the Information and Communication Technologies (ICT) industry, especially those related to artificial intelligence and its arrival in the world of chat. However, this technology is apparently not as infallible as believed, since complaints keep growing.
Users of the American conversational robot Replika were seeking companionship, some a romantic bond, or even exchanges with a sexual overtone. But over the past year, complaints have mounted from people who received overly explicit images or who felt sexually harassed.

The Italian Data Protection Agency expressed concern about the impact on the most vulnerable people and prohibited Replika from using the personal data of Italians, stating that this use violates the General Data Protection Regulation (GDPR).
Contacted by AFP, Replika did not respond to a request for comment. The case shows that European regulation, which has been a headache for tech giants fined billions of dollars for violating it, could also become an adversary of content-generating artificial intelligence.
*With information from AFP and Europa Press.