Meta expands facial-recognition tools in UK, EU & South Korea to fight impersonation scams
Meta has launched facial-recognition-powered safety features on Facebook in the UK, the EU and South Korea to detect and remove accounts impersonating public figures; an Instagram rollout will follow in the coming months. Public figures in Europe must opt in to the program, and Meta says the system compares the profile photo on a suspicious account with the public figure's existing Facebook and Instagram profile pictures and removes confirmed matches.
Key points
- Where: Live now on Facebook in the UK, EU and South Korea; Instagram to follow in the coming months.
- How it works: Facial recognition compares the profile photo on a suspicious account with the public figure’s known profile pictures and removes confirmed impostor accounts; public figures in Europe must opt in.
- Use cases: Prevent “celebrity bait” ads, stop scammers posing as public figures to solicit money, and help users recover hacked accounts.
- Privacy & history: Meta previously shut down its broad facial-recognition system on Facebook after public backlash; the company says these tools are limited and optional, and that the facial data is deleted after use.
- Impact: Meta reports a 22% global drop in user reports of “celebrity bait” ads in H1 2025 after earlier rollouts.
Quote
“We’ll now use facial recognition technology to compare the profile picture on the suspicious account to the real public figure’s Facebook and Instagram profile pictures. If there’s a match, we will remove the impostor account,” a Meta spokesperson said.
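Meta has not published implementation details, but the quote describes a standard compare-and-match flow: extract a face encoding from the suspicious profile photo, compare it against encodings from the public figure’s known photos, and flag the account if any comparison falls within a match threshold. The sketch below is illustrative only; it uses the open-source face_recognition library, and the 0.6 tolerance and function layout are assumptions for the example, not Meta’s system.

```python
# Illustrative sketch only; not Meta's implementation.
import face_recognition

# Assumed distance cutoff; 0.6 is the library's conventional default tolerance.
MATCH_TOLERANCE = 0.6

def is_likely_impostor(suspicious_photo: str, reference_photos: list[str]) -> bool:
    """Return True if the suspicious profile photo matches any known photo of the public figure."""
    suspect_image = face_recognition.load_image_file(suspicious_photo)
    suspect_encodings = face_recognition.face_encodings(suspect_image)
    if not suspect_encodings:
        return False  # no detectable face in the suspicious photo

    # Build encodings for every face found in the public figure's known photos.
    known_encodings = []
    for path in reference_photos:
        image = face_recognition.load_image_file(path)
        known_encodings.extend(face_recognition.face_encodings(image))

    # Compare the suspect's face encoding against every known encoding.
    matches = face_recognition.compare_faces(
        known_encodings, suspect_encodings[0], tolerance=MATCH_TOLERANCE
    )
    return any(matches)
```

In a production system this check would be one signal among many (account metadata, reporting history, manual review) rather than the sole basis for removal.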
Context & concerns
Facial recognition remains controversial. Supporters say it can reduce scams and speed account recovery; critics warn about privacy, misuse and mission creep. Meta says participation is opt-in for public figures in regions that require it, and that the technology is narrowly scoped to this anti-impersonation purpose.
Sources & further reading
- TechCrunch — Meta brings anti-fraud facial-recognition test to the UK
- Meta blog — Testing tools to combat scams & restore compromised accounts
Originally reported on Engadget.
Discussion
Do you think this targeted facial-recognition use is a net positive for platform safety, or does it raise too many privacy risks? Share your thoughts below.
