
CEO of WPP falls victim to deepfake scam



The CEO of WPP fell victim to an elaborate deepfake scam that involved cloning the boss's voice to solicit money and personal details from the company's workforce.

Mark Read, the CEO of WPP, a London-based communications and advertising company whose clients include Dell, Wendy's, Victoria's Secret and Coca-Cola, saw his voice cloned and likeness stolen by fraudsters who created a WhatsApp account seemingly belonging to him.

The fraudsters used a publicly available photo of Read as the profile picture to trick fellow users, according to an email explaining the scam that was sent to WPP's leadership and later reviewed by The Guardian.

WPP CEO Mark Read’s voice and likeness were stolen as part of an elaborate deepfake scam to get the advertising giant’s fellow leaders to hand over their personal details and funds. REUTERS

The WhatsApp account was then used to set up a Microsoft Teams meeting with another WPP executive.

During the meeting, the crooks deployed a fake, artificial intelligence-generated video of Read — also known as a “deepfake” — including the voice cloning.

They also tried using the meeting's chat function to impersonate Read and target a fellow "agency leader" at WPP, a company whose market cap sits around $11.3 billion, asking them to hand over money and other personal details, according to The Guardian.

“Fortunately the attackers were not successful,” Read wrote in the email obtained by The Guardian.

“We all need to be vigilant to the techniques that go beyond emails to take advantage of virtual meetings, AI and deepfakes.”

A WPP spokesperson confirmed to The Post that the attempt at scamming the company’s leadership was unsuccessful.

“Thanks to the vigilance of our people, including the executive concerned, the incident was prevented,” the company rep added.

The scammers reportedly used a photo of Read to set up a WhatsApp account, which was then used to arrange a Microsoft Teams meeting with other WPP leaders while pretending to be Read. diy13 – stock.adobe.com

It wasn't immediately clear which other WPP executives were targeted in the scheme, or when the attack attempt took place.

WPP’s spokesperson declined to provide further details about the scam.

“We have seen increasing sophistication in the cyber-attacks on our colleagues, and those targeted at senior leaders in particular,” Read added in the email, per The Guardian, in reference to the myriad of ways in which criminals can now impersonate real people.

Read’s email included a number of bullet points that he advised recipients to look out for as red flags, including requests for passports, money transfers and any mention of a “secret acquisition, transaction or payment that no one else knows about.”

WPP, a London-based communications and advertising company whose clients include Dell, Wendy's, Victoria's Secret and Coca-Cola, confirmed to The Post that the scammers were unsuccessful in tricking its executives. AFP via Getty Images

“Just because the account has my photo doesn’t mean it’s me,” Read said in the email, according to The Guardian.

The Post has sought comment from WPP, which includes a notice on its “Contacts” landing page that its “name and those of its agencies have been fraudulently used by third parties.”

Deepfake audio has been on the rise, while deepfake images have become a hotly debated topic among AI firms.

While Google has recently moved to distance itself from the dark side of AI, cracking down on the creation of deepfakes — most of which are pornographic — as it deems them “egregious,” ChatGPT-maker OpenAI is reportedly considering allowing users to create AI-generated pornography and other explicit content with its tech tools.

Deepfakes like the graphic nude images of Taylor Swift, however, will be banned.

Deepfakes mostly involve fake pornographic images, with celebrities like Taylor Swift, Bella Hadid and US Rep. Alexandria Ocasio-Cortez falling victim. AFP via Getty Images

The Sam Altman-run company said it is “exploring whether we can responsibly provide the ability to generate NSFW (not-safe-for-work) content in age-appropriate contexts.”

“We look forward to better understanding user and societal expectations of model behavior in this area,” OpenAI added, noting that examples could include “erotica, extreme gore, slurs and unsolicited profanity.”

OpenAI's foray into creating fake X-rated content comes just months after it unveiled Sora, revolutionary new software that can produce high-caliber video in response to a few simple text queries.

The technology marks a dazzling breakthrough from the ChatGPT maker that could also take concerns about deepfakes and rip-offs of licensed content to a new level.


