AI-Forged Military IDs: Inside Kimsuky’s Deepfake Phishing Campaign Targeting South Korea
If a “sample” military ID landed in your inbox today, would you second-guess it? Most of us wouldn’t—which is exactly why this story matters. A North Korean threat group known as Kimsuky reportedly used AI to generate fake South Korean military agency ID cards and slipped them into a spear‑phishing campaign. The goal: make the lure feel so official that clicking a link felt routine.
Here’s why that’s unsettling. The attackers didn’t need a sophisticated graphics team or an insider who knew the ID layout. They used an AI model to design realistic images—apparently by framing the request as a “mock-up” rather than a counterfeit. And it worked well enough to push victims toward a malicious link that installed malware for data theft and remote control.
In this deep dive, we’ll unpack what happened, why deepfake images supercharge phishing, and how security teams (and high-risk individuals) can defend against this AI-accelerated tactic.
What Happened: The Short Version
According to researchers at Genians, a Korea-based cybersecurity firm, Kimsuky executed a targeted phishing campaign that:
- Impersonated a South Korean defense-related institution using a near-identical sender domain.
- Claimed to be circulating “sample” military employee ID cards for review.
- Attached PNG images of fake South Korean military IDs—assessed as deepfakes with a 98% probability.
- Delivered an additional file, LhUdPC3G.bat, which executed malware upon download to enable internal data theft and remote access.
- Targeted a narrow set of recipients: researchers in North Korean studies, human rights activists, and journalists.
- Continued the playbook of the group’s earlier ClickFix-themed operations observed in June, leveraging the same malware family.
Genians’ Security Center first detected this campaign on July 17 and described the deepfake component as a real-world application of AI in an ongoing influence-and-theft operation. You can explore Genians’ work here: Genians.
Who Is Kimsuky (a.k.a. Thallium): A Quick Primer
Kimsuky is a North Korean state-aligned threat actor known for intelligence gathering, credential theft, and strategic targeting of think tanks, academia, media, and government entities tied to the Korean peninsula.
For background and known techniques, see MITRE ATT&CK’s profile: Kimsuky (G0094).
Key traits:
- Patient social engineering. They research targets, mimic institutional language, and tailor lures to the recipient’s world.
- Credential harvesting and long-term access. Phishing isn’t the end—it’s the start. They often aim for persistence and data exfiltration.
- Adaptive TTPs. As defenders catch up, Kimsuky evolves. The shift to AI-generated imagery is a logical next step.
How AI-Generated ID Cards Supercharged the Phish
Let’s break down the lure mechanics and why they worked.
The Setup
- Sender address: Crafted to closely resemble the domain of a South Korean defense institution (typosquatting/homoglyph tactics; a short illustration follows below).
- Subject and body: Framed as an administrative “draft review” of employee ID cards. Familiar, routine, and urgent enough to prompt action.
- Attachments: PNG images of “sample” military IDs. These images were identified as deepfakes with high confidence.
- Dropper: Alongside the images, a batch file (.bat) initiated malicious activities once downloaded.
In short: the attackers wrapped a classic remote control/data theft campaign in a fresh coat of AI-enhanced legitimacy.
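To see why that sender-address trick works, here’s a minimal Python sketch (the domain is a made-up placeholder, not one from the Genians report) showing how a single Cyrillic character makes a lookalike domain render almost identically to the real one while differing at the code-point level:

```python
# Minimal homoglyph illustration. The domain below is a hypothetical placeholder.
import unicodedata

legit = "defense.example.go.kr"            # stand-in for a real institutional domain
lookalike = "defense.example.g\u043e.kr"   # "o" swapped for CYRILLIC SMALL LETTER O

print(legit, "vs", lookalike, "| equal?", legit == lookalike)

# Reveal exactly which characters differ despite looking the same on screen.
for a, b in zip(legit, lookalike):
    if a != b:
        print(f"mismatch: {unicodedata.name(a)} vs {unicodedata.name(b)}")
```

The same comparison logic, applied to sender domains against an allow-list of institutions you actually correspond with, is a cheap first-pass filter for this class of spoofing.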
Why Deepfake Images Work So Well
Humans are wired to trust official-looking documents. Fake IDs tap into:
- Authority bias: Government IDs scream “legitimate.” People hesitate to challenge them.
- Visual trust: Realistic design and familiar symbols disarm suspicion, even when the email context is odd.
- Procedural conformity: “Please review the draft ID” sounds like a task you’d complete without a second thought—especially in defense or research settings.
Add a clean graphic produced by an AI image model, and the lie becomes easier to swallow.
Prompt Injection, “Mock-Ups,” and Guardrails
The Genians report highlights a sensitive point: generating counterfeit government IDs is illegal and should be blocked by AI policies. But attackers can sometimes frame requests as design mock-ups or templates to bypass refusals, a guardrail-evasion tactic closely related to jailbreaking and often discussed alongside prompt injection.
A few important clarifications:
- LLMs and image models have usage policies to prevent misuse. See OpenAI’s guidance: OpenAI Usage Policies.
- Attackers are experimenting with phrasing to slip through guardrails. That doesn’t make the content legal or ethical—it simply shows that AI governance and enforcement must keep improving.
- For defenders, this isn’t about the model brand. It’s about recognizing that imagery—once considered low-risk in phishing—is now a high-trust payload.
If you lead security or compliance, bake this into your risk models: AI makes “good-enough” counterfeits cheap, fast, and easy to scale. The burden shifts to verification, provenance, and layered controls.
For more on synthetic media risks in the enterprise, see CISA’s resources on deepfakes: CISA: Deepfakes and Synthetic Media.
What Is “ClickFix,” and How Does This Campaign Evolve It?
Kimsuky’s earlier June activity used a “ClickFix”-style approach: social-engineering lures that prompt recipients to “fix” or “resolve” an administrative or document issue, typically by walking them through on-screen steps that end with running an attacker-supplied command. The July wave appears to evolve the same concept with a stronger visual lure (the deepfake ID cards) and the same malware family after the initial click.
The pattern is familiar:
1. Invent a light administrative task with perceived urgency.
2. Add authenticity cues (logos, institutions, sample documents).
3. Deliver a link or file that triggers a hidden executable or script.
4. Establish control, exfiltrate data, and persist.
This time, the “authenticity” layer includes AI-generated IDs—making step 2 incredibly convincing.
Why This Matters for Defenders (and Why It’s Not Just Another Phishing Story)
Here’s the strategic shift:
- The payload looks harmless. It’s “just” an image. So casual review feels safe.
- Deepfake tools remove artisan bottlenecks. No more poor-quality graphics betraying the ruse.
- Trust signals now come from visuals, not just text or domains. Security awareness training must evolve accordingly.
- High-risk communities (researchers, activists, journalists) are prime targets. The societal stakes are higher than usual enterprise fraud.
This isn’t a one-off. Expect more adversaries to blend AI-generated visuals, audio, and text into credible lures tailored to the recipient’s world. As one security leader told me: “Assume anything that looks official might be AI.”
Likely TTPs and Tradecraft You Should Map
Without going step-by-step, here’s how to think about detection and response:
- Initial access: Spear-phishing via email with lookalike domains and convincing artifacts.
- Execution: User-initiated launch of a .bat or script-based dropper.
- Defense evasion: Masquerading as routine admin task; benign-looking attachments.
- Command and control: Outbound connections post-execution for tasking and exfiltration.
- Objectives: Credential theft, internal recon, remote control, data theft.
Map these to your ATT&CK coverage. MITRE ATT&CK is your friend here: MITRE ATT&CK.
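If it helps to make that mapping concrete, here’s a small Python sketch of a coverage check. The technique IDs are my own suggested mapping for the stages above, not identifiers taken from the Genians report, so validate them against your own matrix:

```python
# Suggested (not authoritative) ATT&CK mapping for the stages listed above.
suspected_ttps = {
    "Initial access":      ["T1566.001"],  # Phishing: Spearphishing Attachment
    "Execution":           ["T1204.002"],  # User Execution: Malicious File
    "Defense evasion":     ["T1036"],      # Masquerading
    "Command and control": ["T1071"],      # Application Layer Protocol
    "Objectives":          ["T1041"],      # Exfiltration Over C2 Channel
}

# Placeholder: techniques your current detections already cover.
covered = {"T1566.001", "T1071"}

for stage, techniques in suspected_ttps.items():
    gaps = [t for t in techniques if t not in covered]
    print(f"{stage:20s} {'OK' if not gaps else 'GAP: ' + ', '.join(gaps)}")
```

Swap in the techniques your own tooling actually flags, and the gaps list becomes a quick to-do for detection engineering.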
Indicators of Suspicion: What to Look For (Even When It’s “Just an Image”)
Train your analysts and filters to flag:
- Email domain anomalies: one-letter substitutions, homoglyphs, or unusual subdomains tied to defense institutions.
- Unexpected “sample” or “draft” IDs: especially when you don’t actively handle ID issuance tasks.
- Mixed file delivery: Images arriving with scripts or batch files (e.g., PNG + .bat); a triage sketch covering this check follows below.
- Odd file metadata: Creator tools that don’t match institutional standards, missing or generic EXIF data, or timestamps inconsistent with the claimed source.
- Attachment anomalies: Unusual image dimensions, suspiciously high compression artifacts, or filenames that don’t match content.
- User behavior: Recipients who seldom receive admin tasks suddenly getting “urgent” review requests.
Note: Some of these checks require sandboxing, CDR (content disarm and reconstruction), and EDR telemetry. The point is to elevate even image-only attachments to a higher scrutiny tier when context looks off.
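As one concrete way to implement that higher scrutiny tier, here’s a rough Python triage sketch over a folder of saved attachments (the path and extension lists are illustrative assumptions, not artifacts from the reported campaign). It flags claimed PNGs that lack the PNG header and messages where images travel with script files:

```python
# Rough triage sketch for saved email attachments:
#  (a) a file claiming to be .png but missing the PNG magic bytes
#  (b) image attachments arriving alongside script files (e.g., PNG + .bat)
from pathlib import Path

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"
SCRIPT_EXTS = {".bat", ".cmd", ".js", ".ps1", ".vbs", ".lnk"}
IMAGE_EXTS = {".png", ".jpg", ".jpeg"}

def triage(attachment_dir: str) -> list[str]:
    findings = []
    files = [f for f in Path(attachment_dir).iterdir() if f.is_file()]

    for f in files:
        if f.suffix.lower() == ".png":
            with open(f, "rb") as fh:
                if fh.read(8) != PNG_MAGIC:
                    findings.append(f"{f.name}: claims .png but lacks a PNG header")

    exts = {f.suffix.lower() for f in files}
    if exts & IMAGE_EXTS and exts & SCRIPT_EXTS:
        findings.append("image attachments arrived alongside script files")

    return findings

if __name__ == "__main__":
    for finding in triage("./attachments"):  # illustrative path
        print("FLAG:", finding)
```

None of this proves an image is a deepfake; it simply earns the message a trip to the sandbox or a human reviewer.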
A Practical Mitigation Playbook (Security Teams)
Use these layered controls to cut risk without crushing productivity:
- Enforce DMARC, DKIM, and SPF with reject/quarantine policies for high-risk domains (a minimal DMARC lookup sketch follows after this list).
- Strictly block or detonate scripts and executables delivered via email (including .bat, .cmd, .js, .ps1, .lnk, and archives that may contain them).
- Apply CDR to images and documents from external senders. Strip macros and embedded content. Sanitize metadata.
- Quarantine “official ID” or “credential” images for manual review, especially for roles susceptible to this lure.
- Expand EDR rules for script-based execution from user-writable locations and unusual child processes (e.g., image viewers spawning shells; a rough process-ancestry sketch appears at the end of this section).
- Use DNS and egress controls to prevent outbound callbacks to newly registered or low-reputation domains.
- Implement attachment sandboxing, not just link scanning. Remember: the lure may be in the image, but the dropper hides nearby.
- Harden email clients and file handlers to reduce auto-execution or “one-click” runs of downloaded files.
- Train high-risk groups with fresh content on deepfake artifacts and verification workflows. Build muscle memory for out-of-band checks.
- Deploy FIDO2 security keys and phishing-resistant MFA for all admin and high-risk accounts.
- Add DLP policies that trigger on sensitive terms when sent externally in response to unverified requests.
- Build a “verify before you comply” culture: If an email assigns you an unusual admin task, confirm via a trusted channel.
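For the DMARC item at the top of this list, here’s a minimal lookup sketch. It assumes the third-party dnspython package (any DNS library works) and simply reports whether a domain publishes a DMARC record and which policy it enforces:

```python
# Minimal DMARC policy check. Assumes the third-party "dnspython" package
# (pip install dnspython); a missing record or p=none means spoofed mail
# claiming this domain may not be quarantined or rejected downstream.
import dns.resolver

def dmarc_policy(domain: str) -> str:
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return "no DMARC record"
    for rdata in answers:
        txt = b"".join(rdata.strings).decode(errors="replace")
        if txt.lower().startswith("v=dmarc1"):
            tags = dict(kv.strip().split("=", 1) for kv in txt.split(";") if "=" in kv)
            return f"policy: {tags.get('p', 'missing p= tag')}"
    return "no DMARC record"

if __name__ == "__main__":
    for d in ["example.com"]:  # replace with the domains you care about
        print(d, "->", dmarc_policy(d))
```

Run it against your own domains and those of institutions you routinely correspond with; anything short of p=quarantine or p=reject leaves room for exactly the impersonation described above.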
For general guidance on anti-phishing best practices and awareness, CISA maintains helpful resources: CISA Phishing Guidance.
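On the EDR point above (image viewers spawning shells), here’s a rough sketch of the parent/child logic. It assumes the third-party psutil package and uses illustrative process names; in production you would express this as an EDR or Sigma rule rather than a polling script:

```python
# Rough sketch of "image viewer spawned a shell" detection logic.
# Assumes the third-party "psutil" package (pip install psutil).
# Viewer and shell names below are illustrative, not an exhaustive list.
import psutil

VIEWERS = {"mspaint.exe", "photos.exe", "preview", "eog"}
SHELLS = {"cmd.exe", "powershell.exe", "wscript.exe", "bash", "sh"}

def suspicious_children() -> list[str]:
    hits = []
    for proc in psutil.process_iter(["name"]):
        try:
            name = (proc.info["name"] or "").lower()
            if name not in SHELLS:
                continue
            parent = proc.parent()
            if parent and parent.name().lower() in VIEWERS:
                hits.append(f"{parent.name()} (pid {parent.pid}) spawned {name} (pid {proc.pid})")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return hits

if __name__ == "__main__":
    for hit in suspicious_children():
        print("ALERT:", hit)
```

The point is not this exact script; it’s that “a document viewer launched an interpreter” is a cheap, high-signal ancestry check worth encoding wherever your telemetry lives.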
Tailored Advice for Researchers, Activists, and Journalists
If you work on North Korea, human rights, or security policy, you’re a target—not because you did anything wrong, but because your access and influence matter. Here’s how to protect yourself:
- Treat “official-looking” documents with skepticism. A clean logo is cheap now.
- Verify out of band. If an institution asks for a review or data, call the main line or confirm via a known contact.
- Don’t download files you didn’t request. Ask for a secure portal link or a PDF with a verifiable signature.
- Use a separate, locked-down device for opening unsolicited attachments (a “sacrificial” VM or a non-persistent workspace).
- Keep your browser and OS patched. Many droppers exploit lagging updates.
- Use a password manager, enable MFA everywhere, and prefer security keys where supported.
- Report suspicious messages to your security contact or a trusted CERT. Early reporting protects your peers.
ENISA’s threat landscape reports offer broader context for Europe-based NGOs and media: ENISA Threat Landscape.
Provenance and Policy: Getting Ahead of AI-Enabled Phishing
You can’t train your way out of this alone. Organizations should invest in:
- Content provenance standards. Adopt and verify C2PA signals where possible: C2PA.
- Watermark detection and synthetic media scanning for images and audio.
- Vendor evaluation for LLMs and image tools that respects usage policies, logging, and abuse monitoring.
- Clear internal policies for generative AI use: what’s allowed, what’s logged, and how outputs are verified before external use.
- Incident response runbooks that explicitly include deepfake artifacts (visual, audio, and text).
For AI risk governance blueprinting, consult the NIST AI RMF: NIST AI Risk Management Framework.
What To Watch Next
- Multi-modal phish. Expect attackers to pair deepfake images with synthetic voice calls (vishing) or AI-generated documents to “confirm” a change request.
- More convincing brand impersonation. Not just logos, but design systems replicated by AI across email, web, and docs.
- Targeted micro-campaigns. Small batches of highly tailored lures aimed at specific researchers or NGOs—harder to catch with volume-based detection.
- Policy cat-and-mouse. As AI providers tighten guardrails, adversaries will iterate on phrasing and tool choice.
The takeaway: trust signals are getting cheaper to fake. Verification must get cheaper to perform.
Frequently Asked Questions
Q: Who is Kimsuky and what do they target? A: Kimsuky is a North Korean state-aligned threat group focused on espionage and information gathering. They often target academia, think tanks, journalists, and government-adjacent organizations related to Korean peninsula affairs. Reference: MITRE ATT&CK – Kimsuky.
Q: How do AI-generated images make phishing more dangerous? A: They boost credibility. A realistic “sample ID” or “official notice” can bypass our gut skepticism and speed up clicks. Attackers exploit authority and routine processes, making their emails feel normal.
Q: What is prompt injection in this context? A: Prompt injection refers to manipulating an AI system’s instructions or context to produce outputs it would typically refuse. In social engineering, attackers might reframe illegal requests as harmless mock-ups or templates to slip past safeguards. Organizations should treat this as a governance and detection challenge—not a how-to guide.
Q: Can email security tools detect deepfakes? A: Traditional secure email gateways weren’t built for deepfake analysis. However, layering CDR, sandboxing, image analysis, and anomaly detection (metadata, sender reputation, and behavioral signals) improves detection. Defense-in-depth is key.
Q: Is it illegal to create images of government or military IDs? A: In many jurisdictions, reproducing or fabricating government ID cards is illegal. Policies also prohibit using AI tools to generate counterfeit documents. Always consult local laws and organizational policies. See: OpenAI Usage Policies.
Q: What should I do if I think I received one of these emails? A: Don’t click links or open attachments. Report it to your security team or local CERT, verify the request via a trusted channel, and, if you already clicked, disconnect from the network and contact IT/IR immediately.
Q: What are early warning signs that an ID image might be fake? A: Context mismatches (you don’t handle ID reviews), suspicious sender domains, unusual metadata, odd image dimensions, and companion files that shouldn’t accompany an image (like .bat or .zip). When in doubt, verify.
Q: How can small NGOs and newsrooms protect themselves with limited budgets? A: Focus on high ROI steps: security keys for MFA, a strict attachment policy (CDR plus sandboxing via a managed service), locked-down shared workstations for risky tasks, frequent awareness refreshers, and outbound verification protocols.
The Bottom Line
AI has lowered the cost of credibility. In Kimsuky’s latest campaign, deepfake military ID images didn’t just decorate the email—they powered the con. The solution isn’t panic; it’s discipline. Treat visual trust signals as suspect, verify through trusted channels, and harden your stack against script-based droppers and image-borne lures.
If you lead security, elevate synthetic media to a first-class risk. If you’re a researcher, activist, or journalist, assume targeted deception is part of your threat model and adopt simple verification habits.
Want more practical breakdowns like this? Subscribe for ongoing analysis of AI-enabled threats and the defenses that actually work.
Discover more at InnoVirtuoso.com
I would love some feedback on my writing, so if you have any, please don’t hesitate to leave a comment here or on any platform that is convenient for you.
For more on tech and other topics, explore InnoVirtuoso.com anytime. Subscribe to my newsletter and join our growing community—we’ll create something magical together. I promise, it’ll never be boring!
Stay updated with the latest news—subscribe to our newsletter today!
Thank you all—wishing you an amazing day ahead!
Read more related Articles at InnoVirtuoso
- How to Completely Turn Off Google AI on Your Android Phone
- The Best AI Jokes of the Month: February Edition
- Introducing SpoofDPI: Bypassing Deep Packet Inspection
- Getting Started with shadps4: Your Guide to the PlayStation 4 Emulator
- Sophos Pricing in 2025: A Guide to Intercept X Endpoint Protection
- The Essential Requirements for Augmented Reality: A Comprehensive Guide
- Harvard: A Legacy of Achievements and a Path Towards the Future
- Unlocking the Secrets of Prompt Engineering: 5 Must-Read Books That Will Revolutionize You