Meta targets celebrity deepfake scams in Australia with FIRE

Meta has launched a new tool, the Fraud Intelligence Reciprocal Exchange (FIRE), to combat celebrity deepfake scams in Australia, following significant financial losses from these sophisticated schemes.

Scams leveraging AI-generated deepfakes are a growing problem in Australia, as scammers use the likenesses of celebrities to promote fraudulent investment opportunities.

Meta, in partnership with financial institutions, has introduced the FIRE tool to curb these scams, which have led to more than AUD43 million (approximately US$30 million) in losses from January to August 2024.

The FIRE tool enables partner financial institutions to share scam intelligence directly with Meta, and it has already helped block 8,000 pages and 9,000 celebrity-related fraud attempts on Facebook in just six months. Prominent Australian figures such as mining mogul Gina Rinehart, television host Larry Emdur and wildlife conservationist Robert Irwin have had their likenesses used in these scams, often via deepfake images that mislead users into trusting bogus investment pitches.

According to Australia’s Scamwatch service, social media scams have risen by 16.5% year-over-year, contributing to a total of AUD93.5 million (roughly US$64 million) in losses. Meta’s new tool is part of a larger effort to mitigate these impacts, working closely with banks and the Australian Financial Crimes Exchange to detect and block fraudulent activity.

David Agranovich, Meta’s Policy Director for Global Threat Disruption, emphasized the importance of collaborative efforts between tech companies and financial institutions, stating, “Once we’ve blocked them, they’ll look for new ways to come back, new ways to get around our defenses, which is why continued information sharing like this is so critical.”

As scams grow more sophisticated, Meta is urging users to verify suspicious content and remain vigilant online.
