So, you jumped on the AI action figure trend, huh? It’s fun, we get it. But sharing your likeness and personal details, even for a goofy digital doll, isn't entirely risk-free. While the AI model itself probably won't turn evil, the data you feed it—and especially the images you share online—can become ammo for bad actors using AI to craft way more convincing scams.

And they're definitely using it. According to a stark new report from CETaS (the UK's Centre for Emerging Technology and Security), we're seeing a "substantial acceleration" in AI-enabled crime. Think financial fraud, phishing, romance scams, and even worse stuff than that (if you can believe it).
The party line used to be "AI won't take your job, but someone using AI will." Turns out, that applies double to the underworld.
Below, we break down three reports on how scammers are using AI to scale their operations. Not to freak you out, but to prepare you for the world we now live in.
AI: The Scammer's New Favorite Productivity Tool
Remember when phishing emails were easy to spot thanks to terrible grammar and Nigerian princes? AI largely fixed that. Microsoft's latest Cyber Signals report confirms that AI is drastically lowering the technical bar for cybercriminals. It's making it easier and cheaper to generate believable fake content at lightning speed.
How?
- Hyper-Personalized Lures: AI scrapes the web for info about you or your company, building detailed profiles to make social engineering attacks eerily accurate.
- Instant Fake Businesses: Forget needing web dev skills. Scammers now use AI to spin up entire fake e-commerce sites in minutes (down from days/weeks!), complete with AI-generated product descriptions, photos, convincing customer reviews, and even AI chatbots to handle complaints and stall victims.
- Deepfakes & Voice Cloning: Deepfake videos (like the fake CFO in the infamous Arup heist) and cloned voices make it incredibly hard to tell real from fake.
This isn't some future threat; it's happening now, globally. Microsoft sees significant AI-powered fraud activity originating from China and from Germany, the latter in part because it's one of the EU's largest e-commerce markets.
The £20 Million Deepfake and Other Nightmares
The potential damage is staggering. Take the case CETaS highlighted: a finance worker at the engineering firm Arup in Hong Kong was tricked into transferring £20M ($25M USD) after a video call with a deepfake of the company's Chief Financial Officer. The call used synthetic audio and likely a phishing email as well, showing how criminals layer AI techniques with old-school methods.
This capability to mimic trusted figures preys directly on human vulnerabilities. And the financial sector is bracing for impact. Deloitte projects AI could balloon US fraud losses to $40B by 2027, up from $12.3B in 2023. A recent survey found over half (51%) of organizations lost between $5M and $25M last year due to AI-driven threats.
It’s not just big corporations getting hit:
- Job Scams: AI generates fake recruiter profiles, realistic job postings (often "too good to be true"), and even conducts AI-powered interviews to boost credibility. Pro Tip (from Microsoft): Real recruiters won't ask for payment, use personal Gmail accounts instead of company domains, or insist on WhatsApp interviews. Job platforms, for their part, need better employer authentication (like MFA).
- Romance Scams: Fraudsters use LLMs to scale outreach and deepfakes to build trust over time, blending automation with human oversight for maximum manipulation. That friendly stranger sliding into your DMs? Assume bot until proven otherwise.
- Tech Support Scams: Even legitimate tools get abused. Microsoft detailed how cybercriminals (like Storm-1811) impersonated IT support using voice phishing (vishing) to trick victims into granting remote access via Windows Quick Assist. While that specific attack didn't use AI, AI tools do make gathering info for such social engineering easier.
Package Hallucinations: A New Threat for Devs
End-users aren't the only ones at risk. A fascinating paper from researchers at UT San Antonio and other universities uncovered a completely new threat vector targeting software developers: Package Hallucinations. Or, as it's been hilariously dubbed, "slop-squatting."
Here’s the deal: Code-generating language models like ChatGPT, Copilot, and others sometimes "hallucinate" and recommend installing software packages (libraries needed for code to run) that don't actually exist.
Why is this bad? An attacker can:
- Monitor LLM outputs or just guess common hallucination patterns.
- See the LLM suggest installing fake package super-duper-utility.
- Quickly register super-duper-utility on the official repository (like PyPI for Python or npm for JavaScript) and upload malicious code.
- The next unsuspecting developer who gets the same suggestion from the LLM installs the now-real, but malicious, package. Boom: supply chain attack.
The scale is alarming. The researchers tested 16 popular LLMs, generating over half a million code samples.
- They found over 205,000 unique hallucinated package names.
- The average hallucination rate was 21.7% for open-source models and 5.2% for commercial ones (like the GPT family). Still way too high!
- These aren't usually simple typos; many hallucinated names are significantly different from valid ones.
- Hallucinations are often persistent within a single model (repeated >50% of the time in tests) but unique between models (81% appeared in only one model).
This is a potent, novel package confusion attack enabled entirely by LLMs. Simple defenses like checking if a package exists are useless if the attacker has already published it.
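So what does a slightly deeper check look like in practice? PyPI's public JSON API exposes enough metadata to apply some rough heuristics before installing anything an LLM suggests. Here's a minimal sketch of that idea; the `vet_package` helper and the 90-day age threshold are our own illustration, not something from the paper:

```python
# Rough pre-install vetting for an LLM-suggested package name.
# Heuristics only: a freshly registered package with no history is suspicious,
# but a patient attacker can still defeat checks like these.
import sys
from datetime import datetime, timezone

import requests  # third-party: pip install requests


def vet_package(name: str, min_age_days: int = 90) -> list[str]:
    """Return a list of warnings for a PyPI package name (empty list = nothing flagged)."""
    warnings = []
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        return [f"'{name}' does not exist on PyPI - possible hallucination."]
    resp.raise_for_status()
    data = resp.json()

    # Find the earliest upload time across all published releases.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        warnings.append(f"'{name}' exists on PyPI but has no uploaded files.")
    else:
        age_days = (datetime.now(timezone.utc) - min(uploads)).days
        if age_days < min_age_days:
            warnings.append(f"'{name}' was first published only {age_days} days ago.")
    return warnings


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        for warning in vet_package(pkg):
            print("WARNING:", warning)
```

Signals like package age (or download counts from a service such as pypistats) won't stop a determined attacker who plays the long game, which is exactly the paper's point, but they do catch outright hallucinations and the laziest squatting attempts before pip ever runs.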
Why is This Happening?
Several factors fuel this AI crime wave, according to the reports:
- Accessibility: Powerful AI, including open-weight models from places like China with fewer guardrails, is readily available and exploitable.
- Criminal Innovation: Bad actors are agile, unconstrained by ethics, and actively weaponizing AI. They attack AI systems themselves (jailbreaking prompts, removing safety filters – think WormGPT/FraudGPT) to facilitate crime.
- Knowledge Cutoffs: LLMs don't know things that happened after their training date. This partially explains higher hallucination rates for newer coding topics/packages. The sheer cost of retraining massive models makes keeping them constantly updated impractical.
- The "AI vs. AI" Arms Race: While criminals adopt AI for offense, defenders are racing to use AI for defense.
Defenses and Strategies for Fighting Back
The situation isn't hopeless. Companies like Microsoft are actively fighting back:
- AI-Powered Defense: Using machine learning to detect and block fraud attempts at scale (they thwarted $4B in attempts last year and block ~1.6M bot signups per hour).
- Product Safeguards: Building in protections like Edge's typo/impersonation detection and Scareware Blocker, Quick Assist scam warnings, and Digital Fingerprinting to spot suspicious remote sessions.
- Secure Future Initiative (SFI): Mandating "Fraud-resistant by Design" principles, requiring fraud assessments during product development.
- Collaboration: Working with law enforcement and groups like the Global Anti-Scam Alliance (GASA) to share intel and disrupt criminal infrastructure.
The Package Hallucination paper also explored mitigations:
- Retrieval Augmented Generation (RAG): Giving the LLM relevant info about valid packages alongside the prompt reduced hallucinations.
- Self-Refinement: Asking the LLM to check its own output for fake packages worked well for models that are good at self-detection (like DeepSeek); see the sketch after this list.
- Fine-tuning: Retraining models specifically on valid package data drastically cut hallucinations (down to ~2.4%!) BUT significantly hurt the quality/correctness of the generated code. Ouch, that trade-off.
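To make the self-refinement idea concrete, here's a rough sketch of what that loop can look like. Everything here is our own illustration: `query_llm` is a placeholder for whatever model API you actually use, and the prompts are not the ones from the paper.

```python
# Self-refinement sketch: generate code, then ask the model to audit its own
# package suggestions before anything gets installed.
import re


def query_llm(prompt: str) -> str:
    """Placeholder: wire this up to your actual model API (OpenAI, DeepSeek, local, etc.)."""
    raise NotImplementedError


def generate_with_package_audit(task: str) -> tuple[str, list[str]]:
    # Step 1: normal code generation.
    code = query_llm(f"Write Python code to {task}. Include all imports.")

    # Step 2: pull out the top-level module names the code imports.
    modules = re.findall(r"^\s*(?:import|from)\s+([A-Za-z_]\w*)", code, re.M)

    # Step 3: ask the same model to second-guess itself.
    audit = query_llm(
        "For each of these Python package names, answer 'real' or 'unsure' "
        "depending on whether you are confident it exists on PyPI:\n"
        + "\n".join(sorted(set(modules)))
    )
    flagged = [
        line.split()[0]
        for line in audit.splitlines()
        if line.strip() and "unsure" in line.lower()
    ]

    # Anything flagged should be checked against the registry (or a human)
    # before it goes anywhere near `pip install`.
    return code, flagged
```

As the paper notes, this only works when the model is genuinely good at recognizing its own fabrications, so treat the "real" answers as a filter, not a guarantee.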
CETaS also emphasized the need for law enforcement to rapidly adopt AI tools themselves and recommends a dedicated AI Crime Taskforce (within the UK's NCA and coordinating internationally via Europol) to track trends, map criminal bottlenecks (like access to compute or specific expertise), and develop disruption strategies.
What About You? Stay Vigilant, And Be VERY Skeptical
Ultimately, technology alone can't solve this. Human awareness and skepticism are crucial lines of defense.
- Verify Everything: Don't blindly trust AI outputs, whether it's a package recommendation, a job offer, or investment advice. Double-check legitimacy through official channels.
- Guard Your Data: Be mindful of what personal info and images you share online and feed into AI tools. Assume anything public can be scraped.
- Spot the Red Flags: Learn the signs of common scams (urgency, requests for payment/personal info, unprofessional communication). Microsoft offers good consumer protection tips.
- Use Security Features: Enable MFA wherever possible (so logging in requires a second factor, like an authenticator app code, on top of your password), keep software updated, and use browser security features.
- Report Suspicious Activity: If you encounter a scam, report it (e.g., to Microsoft at microsoft.com/reportascam, or relevant platforms/authorities).
AI is an incredible tool, but like any powerful technology, it's being wielded by both good and bad actors. Staying informed and cautious is the best way to navigate the digital landscape in 2025. And maybe think twice before sharing that AI-generated action figure everywhere...