Report: Meta may be earning billions from scam ads — enforcement and accountability questions
A recent report details how scam and illicit ads on Meta’s platforms — including Facebook, Instagram and WhatsApp — could account for a sizable share of the company’s revenue. Internal estimates highlighted by the report suggest scam ads might represent as much as 10% of revenue (roughly $16 billion), though Meta disputes that figure and calls it “overly‑inclusive.”
The coverage raises concerns about how effectively Meta polices bad actors and whether business priorities limit enforcement. Reportedly, small advertisers aren't blocked until they accumulate multiple strikes, and bigger spenders have sometimes racked up hundreds of strikes before removal. Executives have reportedly discussed avoiding enforcement actions that could meaningfully cut ad revenue, illustrating the tension between growth and safety.
Key facts at a glance
- Potential scale: Internal estimates cited in reporting put scam ads at up to ~10% of Meta’s revenue (~$16B), though Meta disputes that calculation.
- Types of fraudulent ads: e‑commerce and investment scams, illegal online casinos, and banned medical products were among the flagged categories.
- Enforcement issues: Reports describe repeat offenders continuing to buy ads, with strike and removal thresholds that vary by advertiser size and spend.
- Company response: Meta says it reduced user reports of scam ads by 58% over 18 months and removed millions of pieces of scam content in 2025, while dismissing the revenue estimate as rough and overly inclusive.
Why this matters
Scam ads harm consumers directly and erode trust in advertising ecosystems. When bad actors can repeatedly place ads that promote fraud, the platforms that host those ads face legal, regulatory and reputational risk. The scale alleged by the report also raises questions about whether current internal controls and automated systems are sufficient.
Policy, tech and business tensions
- Advertiser moderation requires balancing automated detection, human review and appeals processes — each has cost and accuracy trade‑offs.
- Strict enforcement can reduce revenue in the short term; lax enforcement can expose the platform to lawsuits and regulatory action in the long term.
- Regulators are increasingly scrutinizing platform responsibilities; an outsized dependence on ad revenue from problematic advertisers could invite tougher rules or fines.
What to watch next
- Further investigative reporting and any regulatory inquiries or government statements targeting ad moderation practices.
- Meta’s public disclosures, enforcement metrics, and any changes to ad‑buyer verification or strike policies.
- User and advertiser reactions — will major brands push for tougher safeguards, or will pressure come mainly from lawmakers and consumer advocates?
For the original coverage and a detailed breakdown of the reporting, see the Engadget article on Meta and scam ads.
Discussion: How should platforms balance ad revenue and consumer protection — tougher automated blocks, more human review, stricter advertiser verification, or steeper penalties for repeat offenders? What would you prioritize?
