The Federal Trade Commission's 2025 Consumer Sentinel Network report documents $2.1 billion in reported social media fraud losses in the United States, a 47% increase over the $1.43 billion recorded in 2024. The report, published April 28, 2026, attributes the surge to the widespread adoption of AI-generated content in fraud operations: specifically, deepfake video impersonations of executives and celebrities, synthetic romance profiles powered by generative AI, and LLM-personalised phishing that adapts messaging to individual target profiles scraped from public social media data.
Key Findings
Investment scams dominate by volume: $1.1 billion (53% of total losses) came from fraudulent investment schemes promoted on social media, primarily AI-generated crypto and trading platform scams. The median individual loss was $4,800, significantly higher than in other fraud categories, because investment scam victims transfer funds voluntarily, which delays recognition and reduces recovery options.
AI deepfakes drove CEO and celebrity impersonation fraud: The FTC identified a 312% year-over-year increase in complaints involving video or audio deepfakes impersonating known individuals, including corporate executives directing employees to transfer funds (business email compromise evolved to video format) and celebrity endorsements for fraudulent investment products.
Romance scam losses reached $640 million: AI-generated personas now operate at scale, maintaining dozens of simultaneous long-term relationships and adapting conversational patterns to individual targets using scraped personal data. The median loss per romance scam victim was $9,200.
Age bracket most affected: Consumers aged 35-54 accounted for 38% of total losses by dollar value, a shift from prior years when older demographics dominated. The FTC attributes this to AI personalisation making scam content more credible to digitally experienced users who previously self-filtered obvious attempts.
Enterprise and Compliance Implications
The report's findings translate into three compliance and risk management considerations for security leaders:
FTC Section 5 exposure for platforms and tools: The FTC has indicated in supplemental guidance that AI tool providers and social media platforms that fail to implement reasonable measures to detect AI-generated fraudulent content may face Section 5 unfair practices enforcement. Organisations operating AI-powered customer communication tools or social platforms should review their content integrity obligations.
EU AI Act Article 50: transparency obligations: AI-generated content used in commercial communications must be labelled under EU AI Act Article 50, enforceable from August 2026. Organisations using AI for customer outreach, marketing, or support need to ensure their disclosure practices are compliant before enforcement begins. The FTC data suggests regulators will have strong incentive to enforce these rules aggressively.
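One practical way to meet a disclosure obligation like this is to enforce labelling at the point of send rather than relying on each authoring team. The sketch below is a hypothetical illustration only: the label wording, the `OutboundMessage` type, and the `apply_disclosure` helper are assumptions, not a reference to any official Article 50 implementing guidance, which should be checked directly.

```python
from dataclasses import dataclass

# Hypothetical disclosure text; the exact wording required under
# Article 50 should be confirmed against final regulatory guidance.
AI_DISCLOSURE = "[This message was generated with the assistance of AI.]"

@dataclass
class OutboundMessage:
    body: str
    ai_generated: bool

def apply_disclosure(msg: OutboundMessage) -> OutboundMessage:
    """Append the disclosure label to AI-generated messages that lack it."""
    if msg.ai_generated and AI_DISCLOSURE not in msg.body:
        return OutboundMessage(
            body=f"{msg.body}\n\n{AI_DISCLOSURE}",
            ai_generated=True,
        )
    return msg  # human-written, or already labelled: pass through unchanged

labelled = apply_disclosure(OutboundMessage("Thanks for contacting support!", ai_generated=True))
print(AI_DISCLOSURE in labelled.body)  # True
```

Centralising the check this way also makes it idempotent: running a message through the gate twice never produces a duplicate label.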
Social engineering training gap: The FTC data reveals that conventional social engineering awareness training is increasingly insufficient. It teaches users to distrust obviously clumsy phishing attempts, but AI-personalised attacks exploit real social context, match communication styles, and operate over extended timeframes that defeat pattern recognition. Security awareness programmes should incorporate AI-generated social engineering scenarios, including deepfake video recognition.
Recommended Organisational Actions
- Update security awareness training to include AI-specific content: how to verify identities out-of-band when receiving investment or transfer requests via social media or video calls, and how to recognise deepfake video artefacts.
- Review AI Act Article 50 readiness if your organisation uses AI-generated content in any customer-facing communication: confirm labelling obligations are understood and will be met before August 2026 enforcement.
- Establish executive impersonation incident response protocols: define how employees should verify and escalate requests that appear to come from executives via social media, video, or unfamiliar communication channels.
- Brief the board on AI-driven fraud risk: the FTC data provides a concrete business case for investment in fraud-aware identity verification controls for financial transactions above defined thresholds.
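The verification and threshold controls recommended above can be expressed as a simple escalation policy. The sketch below is illustrative only: the channel names, the $10,000 threshold, and the `requires_escalation` rule are assumptions an organisation would replace with its own policy, not values drawn from the FTC report.

```python
from dataclasses import dataclass

# Illustrative threshold; each organisation defines its own.
OUT_OF_BAND_THRESHOLD_USD = 10_000

# Channels treated as high-risk for executive impersonation, per the
# recommendations above (social media, video calls, unfamiliar channels).
HIGH_RISK_CHANNELS = {"social_media", "video_call", "personal_email"}

@dataclass
class TransferRequest:
    amount_usd: float
    channel: str              # channel the request arrived on
    requester_verified: bool  # identity confirmed via a known out-of-band contact

def requires_escalation(req: TransferRequest) -> bool:
    """Escalate any unverified request that arrived on a high-risk
    channel or exceeds the out-of-band verification threshold."""
    if req.requester_verified:
        return False
    return (
        req.channel in HIGH_RISK_CHANNELS
        or req.amount_usd >= OUT_OF_BAND_THRESHOLD_USD
    )

# Example: a $5,000 request arriving over a video call from an
# unverified "executive" is escalated before any funds move.
print(requires_escalation(TransferRequest(5_000, "video_call", False)))  # True
```

The key design point is that verification status, not the apparent sender, gates the decision: a convincing deepfake on a video call fails the check exactly as an anonymous message would.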