You can clone your voice with just three seconds of audio using artificial intelligence

Artificial intelligence continues to advance by leaps and bounds, enabling users to perform tasks such as translating texts or predicting the next words in a sentence.

However, it is also possible to clone a voice with just three seconds of audio. According to the portal Computer Today, there are two reliable ways to achieve this. The first is VALL-E, an artificial intelligence developed by Microsoft. The other option is ElevenLabs.

As for VALL-E, it can imitate a person's voice, including their emotion and intonation, after only three seconds of training. However, the AI is still in a pre-release phase and is not yet publicly available.


VALL-E draws on technology from the startup OpenAI, the creator of ChatGPT, which Microsoft also intends to integrate into its Bing search engine and Office suite.

For its part, ElevenLabs can already be used. To clone a voice with it, carry out the following steps:

  • Register on the official website by clicking on ‘Sign Up’.
  • Click on the ‘Voice Lab’ section and then on ‘Voice Cloning’.
  • Click on ‘Add Instant Voice’ and record what you consider necessary.
  • The platform allows you to record one minute of audio.
  • To name the file, the user can click on the ‘Name’ section.
  • Press the ‘Edit’ tool and then write the text you want to reproduce.
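Beyond the web interface described above, ElevenLabs also exposes a REST API for generating speech with a cloned voice. The sketch below is a hypothetical outline based on the service's public documentation: the endpoint path, the `xi-api-key` header, and the placeholder key and voice ID are assumptions, and the request is only built, not sent.

```python
# Hedged sketch: generating speech with an ElevenLabs cloned voice via its
# REST API. Endpoint and header names are assumptions from public docs;
# "YOUR_API_KEY" and the voice ID are placeholders, not real credentials.
import json
import urllib.request

API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(api_key: str, voice_id: str, text: str) -> urllib.request.Request:
    """Build (but do not send) a text-to-speech request for a cloned voice."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/text-to-speech/{voice_id}",
        data=payload,
        headers={"xi-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# Sending this request with a valid key would return audio bytes that a
# real client would save to a file such as output.mp3.
req = build_tts_request("YOUR_API_KEY", "my-cloned-voice", "Hello from a cloned voice")
print(req.full_url)
```

Building the request separately from sending it keeps the example runnable without an account, and mirrors how a real client would add error handling around the network call.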

It is important to mention that this system is optimized for English; in other languages, it may recognize the voice only partially.

Five ways cybercriminals use artificial intelligence to steal from users

Due to the large number of functions that artificial intelligence has, various cybercriminals take advantage of it to steal or scam users.


Phishing attacks

Criminals can use the chatbot to generate far more persuasive malicious emails, since artificial intelligence can produce text with fewer misspellings and better consistency than the malicious emails people usually receive.

This would make fake emails that impersonate recognized brands more credible because they are better worded. Users should therefore be especially cautious when reviewing suspicious emails: do not click the links that accompany these messages or download their attachments.

Identity Theft

As with malicious emails, cybercriminals can use artificial intelligence to create scams that impersonate well-known brands (banks, streaming platforms, retailers, online services). In this way, criminals try to steal victims' personal information in order to access their accounts.

Create malicious code

ChatGPT can also serve as a software development tool, generating code in various programming languages. Although it warns about the risks of this type of request and points out that it could violate ethical standards and its service policy, some users have managed to obtain malicious code along with a detailed explanation of how it works.


Malicious chats

ChatGPT has an API that allows it to power other chatbots, so the technology can be embedded in messaging platforms such as WhatsApp. However, it can also be used for malicious purposes, for example, generating chats that scam people with more credible-sounding messages.
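To illustrate the legitimate side of this capability, here is a minimal sketch of how a messaging bot could relay a user's message to OpenAI's chat completions API. The endpoint and JSON shape follow OpenAI's public API; the placeholder key and the bot wiring around the call are assumptions for illustration, and the request is only built, not sent.

```python
# Hedged sketch: one turn of a chat relay built on OpenAI's chat completions
# API. "YOUR_API_KEY" is a placeholder; a real integration (e.g. a WhatsApp
# bot) would send this request and forward the model's reply to the user.
import json
import urllib.request

def build_chat_request(api_key: str, user_message: str) -> urllib.request.Request:
    """Build (but do not send) a chat completion request for one message."""
    payload = json.dumps({
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": user_message}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("YOUR_API_KEY", "Hello")
print(req.full_url)
```

A full bot would loop over incoming messages, call this for each one, and parse the reply out of the returned JSON; the same simplicity is what makes the API easy to misuse.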


Automate processes in cyberattacks

Before proceeding with an attack, cybercriminals typically run a reconnaissance process made up of several repetitive tasks. ChatGPT could take over those activities, making it easier to design and carry out an attack against an organization.
