According to the Washington Post, several couples sent money to children who appeared to be calling for help, as any caring family would. What they did not realize was that the voice on the line was not their child's at all: it was generated by artificial intelligence.
Fraud with AI-generated voices
For example, Microsoft’s VALL-E model can produce highly realistic and consistent voice imitations from an audio clip only 3 seconds long, and it can even render different emotional tones (angry, excited, and so on) from that same short sample. A similar tool, ElevenLabs, produces even more realistic voices; you can judge the tool's quality for yourself in the Twitter video just above.
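To illustrate how low the barrier has become, here is a minimal sketch of synthesizing speech through ElevenLabs' public text-to-speech REST endpoint. The API key, voice ID, and message are placeholder assumptions, not values from this article; check the current API documentation before relying on the details.

```python
# Hypothetical sketch: synthesize a short spoken message with the
# ElevenLabs text-to-speech REST API. Placeholders must be replaced
# with real credentials and a real voice ID.
import requests

API_KEY = "YOUR_API_KEY"     # issued by ElevenLabs (placeholder)
VOICE_ID = "some-voice-id"   # a stock or cloned voice (placeholder)

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"text": "Hi Mom, I need some help."},
)
response.raise_for_status()

# The response body is the synthesized audio.
with open("output.mp3", "wb") as f:
    f.write(response.content)
```

A few lines of code and a few seconds of reference audio are all such services require, which is exactly what makes them attractive to fraudsters.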
At this point, comprehensive measures against this kind of fraud and abuse seem necessary, because the threat goes beyond individual scams. Deepfake videos combined with generated voices can have serious consequences, especially in moments of confusion, and banks' voice-based security checks can be circumvented. Social media platforms may therefore need to build tools that detect AI-generated content in the videos and images uploaded to them. Detection models are already being developed for text: DetectGPT, for example, aims to identify text produced by generators such as ChatGPT.
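To make the detection idea concrete, below is a minimal sketch of DetectGPT's probability-curvature test (Mitchell et al., 2023): machine-generated text tends to sit near a local maximum of a scoring model's log-likelihood, so small perturbations of it lose more likelihood than perturbations of human-written text. The GPT-2 scoring model and the caller-supplied perturbations are assumptions for illustration; the paper generates perturbations with a mask-filling model such as T5.

```python
# Minimal sketch of the DetectGPT probability-curvature test.
# Scoring model (gpt2) and threshold handling are illustrative choices.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def log_likelihood(text: str) -> float:
    """Average token log-probability of `text` under the scoring model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return -outputs.loss.item()  # loss is the mean negative log-likelihood

def detect_gpt_score(text: str, perturbations: list[str]) -> float:
    """Curvature score: the passage's log-likelihood minus the mean
    log-likelihood of its perturbed rewrites. Large positive values
    suggest machine-generated text."""
    original = log_likelihood(text)
    perturbed = sum(log_likelihood(p) for p in perturbations) / len(perturbations)
    return original - perturbed
```

Extending the same perturb-and-compare principle to audio and video is an open problem, which is why platform-level detection tools remain a work in progress.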