Artificial intelligence software mimicked a chief executive's voice so convincingly that it fooled an executive into transferring $243,000 to the criminals' bank account. The CEO of a U.K.-based energy firm believed he was speaking with the CEO of his firm's parent company, the Wall Street Journal reports. He told police that he recognized the "slight German accent and the melody" of the executive's voice.
The funds were quickly swept from a Hungarian bank account to Mexico and other locations. The criminals attempted second and third transfer requests, which raised suspicions, and those transfers were not completed. No suspects have been identified.
So-called "deepfakes" are created using advanced machine learning or artificial intelligence software designed to fool the human senses. While deepfakes are not new, this is reported as the first cybercrime in which criminals clearly used artificial intelligence to execute the crime.
In February 2019, a nonprofit research organization dedicated to "safe artificial intelligence" declined to fully release a text-generation artificial intelligence model because it performed too well at creating deepfake news stories. In July 2019, Virginia became the first state to impose criminal penalties for nonconsensual sexual imagery created using technology like a "deepfake."
This incident, though still rare, highlights the need to rework internal safeguards and policies as technology evolves, and to maintain a response plan for when a breach occurs. Recognizing someone's voice may no longer be sufficient to verify identity for a business transaction.
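The control the paragraph above implies, never authorizing a transfer on voice alone, and requiring confirmation over a second, independent channel for large or unfamiliar payments, can be sketched as a simple approval rule. Everything here (the class, field names, and the threshold) is a hypothetical illustration, not an actual policy from the incident.

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount: float
    requested_via: str          # channel the request arrived on, e.g. "phone"
    callback_confirmed: bool    # confirmed via a second, independent channel
    known_beneficiary: bool     # beneficiary account seen in prior transactions

# Assumed policy limit for single-channel approval (illustrative value).
APPROVAL_THRESHOLD = 10_000

def approve(req: TransferRequest) -> bool:
    """Voice recognition alone never authorizes a transfer. Small payments
    to known beneficiaries may proceed; anything else requires a callback
    to a number already on file, not one supplied by the caller."""
    if req.amount < APPROVAL_THRESHOLD and req.known_beneficiary:
        return True
    return req.callback_confirmed
```

Under this rule, the $243,000 request to an unfamiliar Hungarian account would have been blocked until confirmed out of band, regardless of how convincing the caller's voice was.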