The machines of misery. The underbelly, the cesspool of artificial intelligence.

AI scam factories force trafficked workers to defraud global victims. “Every day, for eight months, I deceived people”. Americans alone lost $12.5 billion last year to these sophisticated online cons.

28 May 2025 – Our misplaced faith in AI has turned the internet into a cesspool of misinformation and spam. We talk constantly about the conspiracy theories, spam, and misinformation online that will only get worse with the advent of AI chatbots. And wait until Veo 3 gets into full swing (see my postscript).

But there is a worse cesspool:

• Young Indonesians applying for tech jobs via Facebook and Telegram are trafficked to scam farms

• Scammers use deepfakes, voice clones, and other technologies to dupe victims around the world

• Americans alone lost $12.5 billion last year, mostly to investment scams

These are the types of stories that “Rest of World” has been pursuing in recent years. A few quotes from a recent article:

“Rest of World” interviewed seven former scammers in Indonesia to learn how they were lured into scam farms and about the technologies they used to defraud victims. The workers requested anonymity to protect their reputations in their communities, as there is stigma attached to scam work. 

They recounted having their passports and cell phones confiscated at the scam centers. They said they were paid poorly and could not leave. Under the close supervision of their bosses, they were forced to lurk on social media sites and dating apps to find victims. They said they spoke to victims on Telegram and WhatsApp using AI-enabled tools that generate deepfake videos in real time. They had “investment” targets to meet, failing which, they were sold to other scam centers. 

“What makes human trafficking for the purpose of online scams different from other kinds of human trafficking is the abuse of technology,” Hidayah said.

A 26-year-old IT graduate from West Sumatra ended up in a scam compound in March 2024 after a string of bad luck. He once worked as a freelance front-end developer but found opportunities drying up in IT. Frustrated, he tried his hand at a fruit distribution business, which failed. 

One day, while browsing Facebook, he saw an opening for a search engine optimization specialist at a Singapore-based stock trading company. Following a job interview with a recruiter on Telegram, he was placed at the company’s satellite office in Cambodia and promised a salary of $800 a month.

He did not realize he was trafficked until his passport was confiscated in Phnom Penh, Cambodia, and he was driven to a remote compound protected by armed guards. He worked 15-hour shifts in a call center and had to defraud victims of $40,000 every month, he told “Rest of World”. He was paid less than half his promised salary.

One of his victims was an Indonesian woman, a fitness enthusiast, whom he groomed into a romantic relationship. He persuaded her to bet $10,000 in a casino in Macau, he recalled.

You can read the whole piece by clicking here.

Yesterday my team wrote about the rapid advance of the agentic AI web. We are seeing a shift from raw AI horsepower to systemic integration: models are being wired into feedback loops, infrastructure, and ecosystems.

As a postscript, they noted Google’s new Veo 3 – an AI model that generates video, complete with speech and sound, from text prompts alone. Every one of the following clips was made solely from prompts:

[Embedded Veo 3 video clips]

OK, the embedded subtitles/text need work, but they’ll fix that.

But its realism heightens worries about the flood of synthetic media about to be unleashed.

It brought to mind an article by Maura Grossman and Paul Grimm that examines the growing prevalence of generative AI and deepfakes, their inevitable impact on legal proceedings, and the ease with which realistic fake content can now be created, democratizing fraud and disinformation. My colleague Doug Austin covers the article here.

It also brought to mind a new AI technology I saw earlier this year that can scan email threads and delete incriminating and/or proprietary information. Obviously, you may not be able to delete copies of that email that already sit in other people’s threads. But its developers are now combining the technology with an “email recall” function and a data deduplication technique, so who knows.

Yep. Generative AI and other forms of AI are going to have a heavy impact on legal proceedings and on my legal practitioner brethren.

NOTE: my team interviewed Maura Grossman at Legalweek this year. We are somewhat behind on video editing and post-production, but we should have her interview out next week.
