Scams have proliferated on social media, and Asia has suffered a plague of them. The surge in social media-enabled scamming has led authorities to call on the companies operating the platforms to step up their responsibilities.
Meta has been ordered to expand measures to stem Facebook impersonation scams. The Singaporean authorities first implemented a directive in September 2025 that required Meta to bring in measures including facial recognition to curb Facebook scams.
It was the first enforcement action under the Online Criminal Harms Act (OCHA). The directive proved somewhat effective, with Singaporean police detecting a decrease in Facebook incidents involving impersonation of government officials. But the scammers have adapted.
The scammers have “pivoted to impersonate people not covered under the first implementation directive,” the Ministry of Home Affairs (MHA) said in a press release. Now, the country’s police have implemented a second directive under OCHA.
Meta must now strengthen its facial recognition measures, targeting scam ads and impersonation of public figures. By January 31, it must apply enhanced facial recognition to detect impersonation of government office holders.
By February 28, the measures must extend to individuals identified by police as being at high risk of impersonation, including those for whom police reports have been filed. By June 30, Meta is required to roll out facial recognition protections for notable Facebook users in Singapore, with implementation taking place in phases.
The measures are intended to reduce scam advertisements, accounts and business pages impersonating officials and other high-risk individuals. Failure to comply could result in fines of up to S$1 million (US$791,000). Continuing violations may incur additional penalties of up to S$100,000 per day.
Meta said it is working with the Singapore government to address impersonation and “celeb-bait” scams, citing a global decline of more than 50 percent in user reports of scam ads between June 2024 and October 2025. The company added that it uses a “multi-layered defence” combining automated detection and facial recognition technology.
Meta has been exploring the use of facial recognition to prevent scam ads on its platforms for over a year. In late 2024, it rolled out face biometrics to detect and prevent celeb-bait ads in a number of jurisdictions. Celeb-bait is a scam in which images of famous people are used to lure users into engaging with ads, which then lead to scam websites that ask for personal information or money.
When responding to the initial directive from the Singaporean authorities, Meta was quick to point to its policies, reiterating that impersonation and deceptive use of public figures in ads violate its rules and that it removes such content when detected.
Meta is now drawing criticism in India following its actions — or lack thereof — in handling an ad that features a deepfaked Droupadi Murmu, the President of India.
Meta reportedly declined to remove a deepfaked advert featuring Murmu, which promoted an investment scheme on Facebook, even after it was reported by a senior officer from the Andhra Pradesh police, according to The 420.
After the deepfaked video ad was reported to Meta, the content was not removed, with Meta claiming in its response that it did not violate Facebook’s Community Standards. Screenshots of Meta’s support response were shared online. The fraudulent ad appears to have been strategically timed, running on India’s Republic Day, when the country’s president is especially prominent.
Meta’s actions drew fierce criticism. In an editorial, MediaNama called Meta’s “refusal” to act a reflection of how often enforcement “collapses when harm sits uncomfortably close to engagement or revenue.”
“What the deepfake advertisement impersonating the Indian President exposes is not just a failure of moderation, but a deeper unwillingness by platforms to confront the predictable misuse of generative AI,” the op-ed argued.
“In effect, platforms continue to treat impersonation as a policy edge case rather than an integrity breach.”
Meta’s refusal was shared by Dr. Fakkeerappa Kaginelli, the senior Indian police officer, on LinkedIn. In his post, Kaginelli said: “When fake AI content impersonating the highest constitutional authority of the country fails to meet the threshold for action, it raises serious concerns about the effectiveness — and intent — of these standards.”
“When profit comes before public safety and national dignity, such outcomes become inevitable,” his post concluded.
These concerns have been growing across the region. In November, Thailand’s Digital Economy and Society minister, Chaichanok Chidchob, urged messaging and social media platform providers to increase their sense of responsibility. “Global platforms must play a stronger role in protecting Thai users — not just by providing services, but by sharing responsibility for preventing cybercrime,” the minister said.
In Vietnam, the country’s National Cybersecurity Association (NCA) has warned of increasingly sophisticated online scams, with criminals using deepfake technology to impersonate people and organizations. The NCA has partnered with TikTok Vietnam, with creators and influencers leveraged for an anti-scam campaign, reports Vietnam Plus.
Vu Ngoc Son, head of technology at NCA, said: “Online scammers are constantly looking for new technologies and even experimenting with new tactics to bypass detection.”
In its Semiannual Adversarial Threat Report, Second and Third Quarter 2025, Meta revealed that Cambodian scam networks use genAI to impersonate government officials and police to defraud people across Asia. These networks have grown in both size and sophistication, and AI tools have enhanced their scam efforts: stolen and fabricated identities increase credibility, while automation expands operations.
These threats are drawing regulatory attention even as Meta appears all too aware of the issue. In South Korea, officials have said they will ramp up removal of deceptive adverts that use deepfaked celebrities and fabricated experts on social media.
From this year, AI-generated photos and videos must be labelled as such in South Korea, with offenders who fail to comply subject to fines. Lee Dong-hoon said such ads are “disrupting the market order,” the Associated Press reported.
Neighboring Japan also took issue with a deluge of scam ads on Facebook and Instagram. Wary that it would be forced to verify the identity of all its advertisers, Meta began mass enforcement to stem the volume of the problematic ads. But it also reportedly made such ads harder for Japanese regulators to see, according to an investigation by Reuters.
Referring to internal documents, Reuters highlighted a “search-result cleanup” Meta reportedly uses that effectively removes fraudulent adverts from the searches regulators use to discover them. Reuters reported that the tactic proved so successful in Japan that Meta added it to a “general global playbook” for markets where it faces increased scrutiny from policymakers, including the U.S., Europe, India, Thailand, Australia and Brazil.
Why it does this comes down to money. In a series of investigations, Reuters reported that “high risk” scam ads generate as much as $7 billion in revenue each year for Meta. The advertising business for Meta is extremely lucrative. And its China ad business was “thriving” according to the news agency, reaching more than $18 billion in annual sales in 2024, over 10 percent of Meta’s global revenue.
In China, citizens are unable to access Meta’s social media platforms, which are blocked, but Chinese companies are allowed to advertise to overseas consumers. However, Meta discovered that about 19 percent of its Chinese ad revenue, more than $3 billion, came from adverts promoting scams, illegal gambling, pornography and other banned content, Reuters reported.
Meta took a “reactive only” approach, acting only when laws compelled it to. Internal documents also showed that when Meta did take action, scammers simply redirected the blocked ads to other countries where rules or enforcement were weaker.
Singapore remains one of the few jurisdictions where Meta has been strong-armed into acting. Another is Taiwan where regulators passed a law mandating platforms to verify advertisers. After seeking delays, Meta complied after it faced fines of $180,000 for each unverified scam ad. Taiwanese officials reported a 96 percent drop in investment scam ads and a 94 percent drop in identity impersonation scams.
Meta also set up a special anti-scam team focused on China, and its enforcement tools were having an effect. But the team was then disbanded, apparently on instructions directly from Mark Zuckerberg, according to a late 2024 document cited by Reuters.
Meta’s documents indicate the company knows that scam activity would be reduced by adopting universal advertiser verification, which it could implement in as little as six weeks, but that it would rather not pay the roughly $2 billion cost of implementation. Meta earned revenues of $164.5 billion in 2024, mainly from advertising.