Meta revives facial recognition technology on Facebook, Instagram to combat ‘celeb-bait’ scams

Meta is set to deploy facial recognition technology in its battle against fake celebrity endorsement scams, also known as "celeb-bait ads."

Scammers often use photos of well-known public figures to create seemingly legitimate ads that lure users to fraudulent websites. These sites typically attempt to steal personal information or money.

Meta’s new approach will compare images in ads to pictures posted on celebrities’ verified Facebook and Instagram accounts. If a match confirms the ad is part of a scam, Meta will block it. The company revealed this plan in a recent blog post but did not specify how widespread the issue is across its platforms.
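Meta has not published technical details of the matching system. Conceptually, though, the check resembles a face-embedding similarity comparison between the ad image and photos from the celebrity's verified profile. The Python sketch below is only an illustration of that idea under stated assumptions: the `embed_face` helper, the similarity threshold, and the file names are hypothetical placeholders, not Meta's actual pipeline.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # hypothetical cutoff for declaring a face match


def embed_face(image_path: str) -> np.ndarray:
    """Placeholder for a face-embedding model that maps a face crop to a
    fixed-length vector. Here it just returns a deterministic random
    unit-length vector so the sketch runs without a real model."""
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    vec = rng.standard_normal(128)
    return vec / np.linalg.norm(vec)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Vectors are already unit-normalized, so the dot product is the cosine.
    return float(np.dot(a, b))


def is_celeb_bait(ad_image: str, verified_profile_images: list[str]) -> bool:
    """Flag the ad if the face in it closely matches any photo posted on the
    celebrity's verified account."""
    ad_embedding = embed_face(ad_image)
    return any(
        cosine_similarity(ad_embedding, embed_face(profile_img)) >= SIMILARITY_THRESHOLD
        for profile_img in verified_profile_images
    )


if __name__ == "__main__":
    verified_photos = ["verified_post_1.jpg", "verified_post_2.jpg"]
    if is_celeb_bait("suspicious_ad.jpg", verified_photos):
        print("Match found: block the ad")
    else:
        print("No match: ad passes this check")
```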

The introduction of facial recognition comes as Meta, which reports around 3.3 billion daily active users, continues to rely on AI to enforce its content policies. Although automation has helped the company keep pace with the sheer volume of policy violations, errors in that enforcement have previously led to unintended account suspensions and blocks.

In addition to cracking down on fake celebrity scams, Meta will also test a new way for locked-out users to regain access to their accounts. Some users will be able to submit a video selfie for identity verification, which will be compared to photos already on the account. Meta promises to delete all facial data immediately after this process, regardless of the outcome.
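Meta has not described this verification flow in technical terms either. The sketch below only illustrates the stated behaviour: compare face data from the selfie against photos already on the account, then discard the facial data whatever the result. The `embed_image` helper, the threshold, and the file names are assumptions for illustration; the `try/finally` block stands in for the promise to delete facial data immediately, regardless of outcome.

```python
import numpy as np

MATCH_THRESHOLD = 0.80  # hypothetical similarity cutoff


def embed_image(path: str) -> np.ndarray:
    """Placeholder face-embedding function; returns a deterministic random
    unit-length vector so the sketch runs without a real model."""
    rng = np.random.default_rng(abs(hash(path)) % (2**32))
    vec = rng.standard_normal(128)
    return vec / np.linalg.norm(vec)


def verify_video_selfie(selfie_frames: list[str], account_photos: list[str]) -> bool:
    """Compare facial data from the selfie frames against photos already on
    the account, then delete all facial data regardless of the result."""
    facial_data = {
        "selfie": [embed_image(f) for f in selfie_frames],
        "account": [embed_image(p) for p in account_photos],
    }
    try:
        return any(
            float(np.dot(s, a)) >= MATCH_THRESHOLD
            for s in facial_data["selfie"]
            for a in facial_data["account"]
        )
    finally:
        # Stated policy: facial data is deleted immediately after the check,
        # whether or not verification succeeds.
        facial_data.clear()


if __name__ == "__main__":
    restored = verify_video_selfie(
        ["selfie_frame_01.jpg", "selfie_frame_02.jpg"],
        ["account_photo.jpg"],
    )
    print("Account access restored" if restored else "Verification failed")
```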

The move comes against the backdrop of Meta’s complicated history with facial recognition. The company shut down its previous facial recognition system in 2021 amid privacy concerns and regulatory pressure, and it has been sued for using the technology without proper user consent.

In 2024, Meta agreed to pay US$1.4 billion to the state of Texas to settle a lawsuit over its use of the technology, and it earlier paid US$650 million to settle a separate suit in Illinois.

To avoid further complications, Meta will not conduct the video selfie test in either Texas or Illinois.
