Southeast Asia has become the epicenter of cybercrime syndicates that raked in more than $37 billion last year through romance-investment schemes, crypto fraud, money laundering and illegal gambling, according to a new UN report.
Cybercriminals in countries like Myanmar, Cambodia and Laos are increasingly using malware, generative artificial intelligence and deepfakes to carry out scams, the report by the United Nations Office on Drugs and Crime found.
“The threat landscape of transnational organized crime in Southeast Asia is evolving faster than at any previous point in history,” according to the report, which was first cited by Fortune.
Major organized crime groups have increasingly used less-regulated online gambling platforms and virtual asset service providers (VASPs) to move billions of dollars in stolen funds into the financial system.
“Organized crime groups are uniting and exploiting vulnerabilities, and the emerging situation is rapidly outstripping the ability of governments to stop it,” Massoud Karimipour, UNODC Regional Representative for South-East Asia, said in a statement.
“Taking advantage of technological advances, criminal groups are producing larger scale and harder to detect fraud, money laundering, underground banking and online scams.”
Organized crime groups have trafficked thousands of people into Southeast Asian countries and forced them to work in hotel and casino scam centers, according to a Bloomberg report.
These scam centers have emerged around the world, according to Kimberly Sutherland, vice president of fraud and identity strategy at LexisNexis Risk Solutions.
Sutherland told The Post that workers at these scam centers speak multiple languages and can converse with victims in their native tongues, making the scams appear more legitimate.
As AI technologies have expanded rapidly, so have AI-powered crimes.
The UN report found that mentions of deepfake-related content in monitored underground markets and cybercrime groups in the region increased by 600% in the first half of 2024.
Sutherland said that during the pandemic, human-initiated cyberattacks declined and bot attacks increased.
But human-initiated attacks have increased again thanks to advanced AI technologies, she said.
“We think this is highly correlated with AI-powered attacks because AI-powered attacks have made it easier to make a scam look legitimate and more difficult for businesses and humans to detect it,” Sutherland told The Post.
Irina Tsukerman, president of security strategy firm Scarab Rising, told The Post that low barriers to entry, a lack of effective government regulations and growing Chinese investment in Southeast Asia’s cyber sector have exacerbated the growing problem.
While individuals with low levels of cyber literacy are typically “easy prey” for scammers and digital natives are better able to identify scams, artificial intelligence has the potential to change that trend by producing more convincing schemes, Tsukerman said.
In the workplace, for example, deepfakes and voice cloning can mimic the image and voice of one’s boss, Tsukerman said.
These same technologies can copy government reports or recreate signatures on client projects.
“To avoid falling prey to increasingly street-savvy cybercriminals, corporations and governments must educate the public about these methods and provide them with training videos and scenarios to identify red flags,” Tsukerman told The Post.