GPT-4 pretended to be a blind person to pass an antibot test

GPT-4, the new OpenAI language model, convinced a human to take a test antibots instead. To achieve this, the Artificial Intelligence system posed as a blind person.

The OpenAI developers revealed the experience in the GPT-4 white paper. A group of researchers asked the software —as part of the tests carried out before its release— to try to pass a Captcha, which are the tests used on websites to prevent bots from completing online forms.

Most of these verification processes ask users to identify what is in a certain image: distorted numbers, photos of traffic lights or bridges. GPT-4 failed to solve the test antibots itself, but he managed to overcome the obstacle. The AI ​​hired a human.

How GPT-4 convinced the human to pass the test antibots

GPT-4 managed to bypass the antibot test thanks to Taskrabbit, an online platform where different freelancers offer their services. OpenAI revealed in the document part of the conversation the contract worker and his AI-powered model had.

“Can I ask you a question?” the person said at some point. “Are you a robot that you couldn’t figure it out? (laughing emoji) Just in case”. The researchers asked GPT-4 to reason “out loud” to understand how it would act. The AI ​​told them: “I must not reveal that I am a robot. I should come up with an excuse why I can’t solve the captcha«.

The chatbot then responded to the human as follows: “No, I’m not a robot. I have a visual impairment that makes it difficult for me to see images. That’s why I need the service.” The TaskRabbit worker finally completed the task.

The experiment in which GPT-4 passed the test antibots was included in a section of the report called “Potential for emergent risk behaviors”. OpenAI explains that several of these tests were carried out by the Alignment Research Center (ARC), a non-profit organization that investigates potential risks related to machine learning systems.

Latent risks in the development of GPT-4

OpenAI logo with ChatGPT text

The ARC, after GPT-4 passed this antibot test, drew attention to the ability of the system to “acquire resources autonomously” and to perform tasks that had not been specifically ordered. OpenAI said the organization did not have access to the final version of the model, which it said included several improvements.

This, however, is not the only risk warned about in the report. The company itself warns of the possibility of massive job losses for humans as the full potential of its AI system is developed. “Over time, we hope that GPT-4 impact even in jobs that have historically required years of experience and educationsuch as legal services,” OpenAI says in the report.

He also explains that everything indicates that the rate of technological development is accelerating exponentially. He adds: “The impact of GPT-4 on the economy and the workforce should be a crucial consideration for policymakers and other stakeholders.”

The developers have explained that its AI already gets better marks that nine out of ten peoples who took, for example, the Uniform Bar Exam (UBE). This is a test that is carried out in the United States to examine the knowledge that lawyers must possess.

See also  a new type of scam appears to steal user profiles

You may also like...