The scope nobody wants to go on record about

In mid-2022, and again with an update in 2024, the FBI, State Department, and Treasury Department's Office of Foreign Assets Control issued a joint advisory warning US companies about a coordinated effort by the Democratic People's Republic of Korea to place IT workers inside foreign employers through fraudulent remote-hire schemes. The advisory said the revenue from this program is a primary funding source for North Korea's weapons of mass destruction and ballistic missile programs, and estimated that the DPRK earns hundreds of millions of dollars per year from the scheme.

Google's Mandiant threat intelligence team tracks the activity cluster as UNC5267. CrowdStrike tracks overlapping tradecraft as FAMOUS CHOLLIMA. Both firms have described the operation as ongoing, large-scale, and specifically targeting US tech companies offering fully remote engineering roles.

The numbers that have made the public record are striking. The DOJ has indicted co-conspirators whose laptop farms, in one case a single Arizona residence, serviced placements at more than 300 US companies. Individual fraudulent hires generate six-figure annual salaries, with the funds routed through US payroll, converted to cryptocurrency, and remitted back to Pyongyang. Treasury's framing is unambiguous: if you hire one, you are paying a foreign adversary to build nuclear weapons, and you may be in violation of US sanctions. Strict liability applies regardless of whether the employer was deceived.

Most hiring teams still have no screening process that would catch one. The reason is simple: fraud at this level of sophistication doesn't fail background checks. It fails pipeline-stage identity verification, and most US companies don't run any.

Case study: the KnowBe4 hire

KnowBe4 is one of the largest security-awareness training companies in the world. In July 2024, CEO Stu Sjouwerman published a public disclosure that his company had hired a North Korean IT worker through an ordinary hiring process. The candidate interviewed four times over video. The resume was clean. The LinkedIn profile was plausible. The references checked out. The background check came back clean.

The moment KnowBe4's newly shipped laptop was powered on at the candidate's US address, the company's own endpoint detection agent fired. The new hire was attempting to load known North Korean malware. Security shut the device down within minutes and opened an investigation.

The reconstructed picture looked like this. The person on the video calls was using a real US citizen's identity, obtained through some combination of purchase and data breach. The company directory headshot appeared to have been produced by a generative-AI image service. The "US address" was a residential house in a low-cost-of-living state whose occupant was a paid co-conspirator. The laptop was shipped to that address, plugged in, and left running — but controlled, in real time, over a VPN and KVM, by an operator believed to be working out of North Korea or the Russian Far East.

KnowBe4 is an unusual case because the company disclosed. Most companies that catch one do not. They conduct a quiet internal investigation, terminate the employee, rotate credentials, and move on. The publicly visible set of these cases is therefore a radical undercount of the true number.

Case study: the Christina Chapman laptop farm

In May 2024, the US Department of Justice indicted Christina Marie Chapman, an Arizona resident, for running a laptop farm out of her home. At the time of the indictment, the DOJ alleged that the scheme had generated more than $6.8 million in wages from more than 300 US companies, including several named Fortune 500 firms. Chapman's role was operationally simple: receive the company-issued laptops mailed by unsuspecting employers, install them on her home network, and leave them powered on with remote-access software configured. The DPRK operators connected from overseas. To each employer, the device was sitting quietly in Arizona, geolocating correctly, logging expected working hours.

At peak, Chapman's house held roughly 90 laptops. Each one represented a full-time fraudulent employee drawing a US tech salary.

The Chapman case is important because it destroys a reassuring assumption. Employers have long relied on a chain of physical-infrastructure signals — US-based IP addresses, US shipping addresses, US-denominated bank accounts — as a proxy for "this person is actually a US-based employee." The laptop farm defeats every single one of those signals without defeating any identity verification performed at intake, because there is no intake identity verification to defeat.

After Chapman, the DOJ announced several additional indictments in 2024 and 2025 against other operators of similar facilities. The pattern has not stopped. It has scaled.

How the scheme actually works, step by step

The DPRK IT worker program is not opportunistic. It is a structured placement pipeline. Understanding it as a pipeline is the key to defending against it as one.

Stage 1 — Identity acquisition

Operators begin with a US identity. Sometimes this is purchased on dark-web marketplaces from vendors that resell credentials from prior data breaches. Sometimes it is a willing co-conspirator who has agreed to lend their name and SSN in exchange for a cut. The pipeline prefers identities with clean records, US-based credit histories, and education at plausible mid-tier US universities whose alumni lookups are hard to cross-check.

Stage 2 — Persona construction

The operator builds a full digital persona on top of the identity: a LinkedIn profile, a GitHub account, a resume, a headshot. In 2022 and 2023 the headshots were often stock photos or real photos of unrelated people. By 2024, most were generated by diffusion-based image models, which produce a plausible single image but, as KnowBe4's case showed, are frequently caught by a reverse-image search because the same face gets reused across multiple personas before a new image is generated.

Stage 3 — Pipeline seeding

Operators apply through mainstream job boards — LinkedIn, Indeed, Hired, Wellfound, direct company career pages. They target remote-first companies and prefer roles that do not require in-person onboarding: software engineering, DevOps, data engineering, analytics, sometimes contract cybersecurity work. They apply at volume, because only a small percentage of applications convert to interviews.

Stage 4 — Interview execution

Interviews are conducted over video. Some operators work the calls themselves using voice-changing and real-time translation software; others use proxy interviewees — actual US citizens hired to appear on camera and perform the interview, who then hand off the role the day the laptop arrives. The most sophisticated cases use live deepfake overlays, though these are still rare in the hiring context as of 2026. More common are blurred or virtual backgrounds, poor lighting, and pre-rehearsed answers to the common engineering screens.

Stage 5 — Offer and onboarding

Once an offer is made, the operator provides a US address — a laptop farm — as the shipping destination. The employer ships the laptop. The farm plugs it in. The operator connects from overseas. The employee begins drawing a salary that gets deposited into a US bank account opened in the stolen identity's name, then steadily laundered through cryptocurrency exchanges and returned to North Korea.

Stage 6 — Exploitation

Most operators keep their heads down and simply collect a paycheck. Some, like the KnowBe4 hire, attempt more aggressive exploitation — installing malware, exfiltrating source code, staging a follow-on attack. Mandiant and CrowdStrike both describe cases of DPRK operators pivoting from legitimate employment into supply-chain compromises of their employers' customers. The opportunistic espionage is a bonus; the paycheck is the product.

The 15 signals that expose these candidates

No single one of these is conclusive. Real candidates occasionally trip any one of them — people move, switch jobs, rebuild profiles. Three or more of these clustering on a single applicant is the threshold at which most hiring teams should pause and escalate; a minimal checklist sketch of that threshold follows the signal lists below.

Contact infrastructure signals

  • Contact details that appear to have been created for this application. Phone and email without any prior association to the candidate's name are a recurring flag. A real engineer working remotely in Boise has usually held the same personal email and phone for years; contact channels that look freshly provisioned, in combination with other signals, are worth a second look.
  • Email address with no reverse-search footprint. Run the address through Google, Have I Been Pwned, and a couple of people-search engines. A real candidate's email usually appears somewhere — in a GitHub commit, an old forum post, a newsletter signup. A freshly created address with zero search footprint, on a mid-career professional, is a flag.
  • Email address whose local part does not match the claimed name. jsmith8472@gmail.com on someone whose LinkedIn says John Smith is fine. davidwilson2020@gmail.com on someone whose resume says Rebecca Chen is not. A minimal scripted version of this check and the footprint check above appears after this list.
  • Free email provider when the candidate claims to work at a company with its own domain. Most real mid-career candidates still use a professional email or a personal address they have had for years, not a brand-new Gmail or Outlook.com address spun up recently.
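
Two of the contact checks above lend themselves to a quick script. The sketch below is a minimal Python version, assuming the requests library and a Have I Been Pwned API key supplied in the documented hibp-api-key header; the candidate values are the hypothetical ones from the bullets above, and neither result is conclusive on its own.

```python
# Minimal sketch of two contact-infrastructure checks: does the email's local
# part contain any recognizable piece of the claimed name, and does the address
# have any breach footprint at all in Have I Been Pwned. Requires an HIBP API
# key; the candidate data below is hypothetical. A zero footprint is one signal
# to stack with others, not proof of anything on its own.
import re
import requests

HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/{account}"


def local_part_matches_name(email: str, full_name: str) -> bool:
    """True if the part before the @ contains a piece of the claimed name."""
    local = email.split("@", 1)[0].lower()
    tokens = [t for t in re.split(r"[^a-z]+", full_name.lower()) if t]
    if not tokens:
        return True  # nothing to compare against; don't flag
    # First name, surname, or an initial+surname pattern (e.g. jsmith) all count.
    patterns = tokens + ([tokens[0][0] + tokens[-1]] if len(tokens) > 1 else [])
    return any(p in local for p in patterns)


def has_breach_footprint(email: str, api_key: str) -> bool:
    """True if HIBP has seen this address in any breach; a 404 means no footprint."""
    resp = requests.get(
        HIBP_URL.format(account=email),
        headers={"hibp-api-key": api_key, "user-agent": "intake-screen-sketch"},
        params={"truncateResponse": "true"},
        timeout=10,
    )
    if resp.status_code == 404:
        return False
    resp.raise_for_status()
    return len(resp.json()) > 0


# Hypothetical candidates from the bullets above.
print(local_part_matches_name("davidwilson2020@gmail.com", "Rebecca Chen"))  # False
print(local_part_matches_name("jsmith8472@gmail.com", "John Smith"))         # True
```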

Digital footprint signals

  • LinkedIn profile under six months old with sparse cross-references to claimed employers. A real engineer who has worked at three companies over eight years will have LinkedIn connections at each of those companies. A fraudulent profile usually has a thin connection graph concentrated on connectors-for-hire.
  • GitHub with no public activity, or activity that looks machine-generated. Private-only contribution histories, repos forked and never touched, and commits at suspiciously regular intervals all suggest a profile built to look populated rather than used.
  • Reverse-image search of the headshot returns unrelated profiles. AI-generated images often get reused across multiple personas. Reverse-searching a headshot is a ten-second check that has caught well-documented cases. A scripted complement that catches headshot reuse within your own applicant pool is sketched after this list.
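
The reverse-image search itself is a manual browser step, but a team that stores applicant headshots can automate the narrower check for face reuse inside its own pipeline. The sketch below uses perceptual hashing via the imagehash and Pillow libraries; the file layout and the distance threshold are illustrative assumptions, and this supplements rather than replaces a reverse-image search against the wider web.

```python
# Sketch: flag a new applicant headshot that is a near-duplicate of one already
# seen in earlier applications. Perceptual hashing tolerates re-encoding and
# light edits, so a reused AI-generated face can still match. The paths and the
# Hamming-distance threshold are illustrative; tune against your own data.
from pathlib import Path

import imagehash
from PIL import Image

HAMMING_THRESHOLD = 6  # phash distances at or below this usually mean "same image"


def find_reused_headshots(new_photo: Path, prior_photos: list[Path]) -> list[Path]:
    """Return earlier applicant photos that are perceptually near-identical."""
    new_hash = imagehash.phash(Image.open(new_photo))
    return [
        p
        for p in prior_photos
        if p != new_photo
        and (new_hash - imagehash.phash(Image.open(p))) <= HAMMING_THRESHOLD
    ]


# Hypothetical layout: one headshot per applicant folder.
new_applicant = Path("applicants/2025-1142/headshot.jpg")
history = sorted(Path("applicants").glob("*/headshot.jpg"))
reused = find_reused_headshots(new_applicant, history)
if reused:
    print("Headshot reuse across applications:", [str(p) for p in reused])
```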

Interview behavior signals

  • Reluctance to use a camera, or heavy use of virtual backgrounds. Occasionally legitimate — but combined with other signals, a tell. The virtual background hides both the physical location and any ambient audio that would contradict a stated geography.
  • Audio latency mismatched to the claimed timezone. A candidate who says they are in Denver but whose audio round-trip latency looks like a transpacific path is worth a note.
  • Specific knowledge gaps about the claimed most-recent employer. Fraudsters know the resume, not the office. Questions like "what IDE did your last team standardize on," "what was your deploy workflow for production," or "who was your skip-level" expose the gap if asked conversationally.
  • Refusal to switch to a different video platform mid-process. A real candidate will switch from Zoom to Google Meet to Teams without hesitation. A candidate running a proxy or a deepfake overlay may resist because the setup is pre-configured for one specific platform.

Logistical signals

  • Shipping address in a region known for laptop-farm activity. The public indictments have clustered in low-cost-of-living states — Arizona, Tennessee, Florida, rural Pennsylvania. Not a reason to reject a candidate, but worth cross-checking against the rest of the signal stack.
  • Shipping address different from the address on the resume or offer letter. Real candidates occasionally provide an alternate shipping address. Combined with other signals, a strong flag.
  • Bank account that was opened within the last 12 months at an online-only bank. Not always possible to check, but sometimes surfaces during payroll setup. Fraudsters frequently use online-only banks because the account-opening KYC is less rigorous than at a branch.
  • Reference contact details that fail the same cross-source identity checks as the candidate's own. A reference whose phone and email don't independently resolve to a real, publicly-traceable person — especially when the candidate's didn't either — often means the same operator is running both sides of the call.
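
The clustering rule stated before these lists is easy to operationalize as a checklist that yields an escalation decision rather than a verdict. A minimal sketch, with one boolean per signal from the lists above, equal weighting as a simplifying assumption, and the threshold of three:

```python
# Minimal sketch of the clustering threshold: each signal is a boolean
# observation, none is conclusive alone, and three or more together trigger a
# pause-and-escalate rather than a rejection. Equal weighting is a simplifying
# assumption; rename or reweight signals to fit your own pipeline.
from dataclasses import dataclass, fields

ESCALATION_THRESHOLD = 3


@dataclass
class CandidateSignals:
    # Contact infrastructure
    freshly_provisioned_contact_details: bool = False
    no_email_search_footprint: bool = False
    email_local_part_name_mismatch: bool = False
    newly_created_free_email_provider: bool = False
    # Digital footprint
    linkedin_under_six_months_with_thin_graph: bool = False
    github_empty_or_machine_generated: bool = False
    headshot_reverse_image_hit: bool = False
    # Interview behavior
    camera_reluctance_or_virtual_background: bool = False
    latency_mismatch_with_claimed_timezone: bool = False
    knowledge_gaps_about_last_employer: bool = False
    refused_platform_switch: bool = False
    # Logistics
    shipping_address_in_known_farm_region: bool = False
    shipping_address_differs_from_resume: bool = False
    new_online_only_bank_account: bool = False
    references_fail_same_identity_checks: bool = False


def triggered(signals: CandidateSignals) -> list[str]:
    return [f.name for f in fields(signals) if getattr(signals, f.name)]


def should_escalate(signals: CandidateSignals) -> bool:
    return len(triggered(signals)) >= ESCALATION_THRESHOLD


# Example: three independent soft signals clear the threshold together.
s = CandidateSignals(
    no_email_search_footprint=True,
    linkedin_under_six_months_with_thin_graph=True,
    camera_reluctance_or_virtual_background=True,
)
print(triggered(s), "->", "escalate" if should_escalate(s) else "proceed")
```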

A pipeline-stage defense process

The goal is to catch fraud before you have spent significant time or money on a candidate. A practical process looks like this:

  1. At application intake, require phone and email on every form. No exceptions, even for referrals. Without contact metadata you have nothing to verify.
  2. Run a verification pass before the recruiter screen. Email domain and reverse-search footprint. Name consistency across phone, email, and resume. Public-source traces tied back to the candidate's own identity. If you use a tool, this takes twenty seconds. If you do it manually, five to ten minutes.
  3. Cross-check the LinkedIn profile. Connections at claimed employers. Account age. Employment-date consistency with the resume.
  4. Reverse-search the headshot. Google Images, TinEye. Ten seconds.
  5. Ask one verifiable question about the most-recent employer in the recruiter screen. Something a resume cannot answer.
  6. At offer stage, verify the shipping address. Look it up. Is it a single-family home? Multi-unit? Commercial? Does anything about it cluster with prior laptop-farm indictments? A scripted starting point for the lookup follows this list.
  7. On the first day, require the new hire to be visible on video at a non-virtual-background location. A one-time check, framed as part of the onboarding photo for the directory.
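
Step 6's lookup can be partly scripted. The sketch below uses the US Census Bureau's free geocoder to confirm that the shipping address resolves to a matchable US street address and notes whether it differs from the address on the resume; the endpoint and parameters follow the Census geocoding service as publicly documented, and the addresses shown are hypothetical. A clean match proves nothing by itself, since laptop farms are ordinary houses, so the output feeds the signal stack rather than deciding anything.

```python
# Sketch of the offer-stage address check: does the shipping address resolve to
# a real US street address, and does it match the address on the resume? Uses
# the free US Census geocoder; verify the endpoint against current Census docs
# before relying on it. Output is notes for a recruiter, not a decision.
import requests

CENSUS_GEOCODER = "https://geocoding.geo.census.gov/geocoder/locations/onelineaddress"


def geocode(address: str) -> dict | None:
    """Best Census match for a one-line address, or None if it does not resolve."""
    resp = requests.get(
        CENSUS_GEOCODER,
        params={"address": address, "benchmark": "Public_AR_Current", "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    matches = resp.json()["result"]["addressMatches"]
    return matches[0] if matches else None


def review_shipping_address(shipping: str, resume_address: str) -> list[str]:
    notes = []
    match = geocode(shipping)
    if match is None:
        notes.append("shipping address does not resolve in the Census geocoder")
    else:
        notes.append(f"shipping address resolves to {match['matchedAddress']}")
    if shipping.strip().lower() != resume_address.strip().lower():
        notes.append("shipping address differs from the address on the resume")
    return notes


# Hypothetical addresses; real use would pull these from the ATS and the offer letter.
for note in review_shipping_address(
    shipping="1600 Pennsylvania Ave NW, Washington, DC 20500",
    resume_address="123 Main St, Boise, ID 83702",
):
    print(note)
```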

Each step is cheap. The failure mode of not running them — a hired DPRK operator drawing salary for six months before detection — is expensive, and the downside scenario (malware, source-code exfiltration, sanctions exposure) is not an ordinary HR risk.

A note on overcorrection. Every one of the signals above has legitimate explanations. Privacy-forward email services are the default for some entire demographics. Virtual backgrounds are a privacy preference. Sparse GitHub activity is the norm for most working engineers. The failure mode is not "reject anyone who trips a signal." It is "pause when multiple signals cluster, and look more carefully before committing more calendar time."

What to do if you think you've found one

First, do not confront the candidate. A confrontation tips the operator off and destroys any forensic value in the investigation. The candidate disappears, the identity gets retired, the laptop farm receives a different device under a different name, and you have learned nothing.

Instead:

  1. Freeze the hiring process. Pause any offer, any background-check order, any equipment shipping. If the candidate is already hired, pause payroll and engage your incident response process as you would for an insider threat.
  2. Document the signals. Screenshot the LinkedIn profile, the resume, the email address, the phone number. Save any recorded interviews if you have them.
  3. Escalate internally. Route to your security or legal team. Large employers have named incident-response contacts; smaller employers should loop in outside counsel before the next step.
  4. Report to the FBI. The Bureau's Internet Crime Complaint Center at ic3.gov is the canonical channel. Most field offices also have a cyber squad that accepts direct tips. The FBI has stated publicly that it wants these reports and that the intelligence value of aggregated reporting is high.
  5. Share anonymized signals with peers. Industry information-sharing groups — the Retail and Hospitality ISAC, Financial Services ISAC, and sector-specific CISO forums — have been increasingly useful distribution channels for persona-level indicators of compromise.

Why this is a recruiter problem, not a security problem

The instinct inside most companies is to treat DPRK IT worker fraud as a security team issue. It is not — or rather, by the time it is a security team issue, the company has already committed to the candidate. The offer has been extended. The background check has been paid for. The laptop has been shipped. The security team sees the fraud only when the device comes online and does something anomalous, which is exactly what happened at KnowBe4.

The most efficient point in the hiring funnel to catch this is at the top of it. Catching it at intake is hundreds of times cheaper than catching it at onboarding, and thousands of times cheaper than catching it after the employee has had access to production systems. The security team's job is the backstop. The recruiter's job is the filter.

That reframing has an important implication: the budget and authority to solve this problem belong to talent acquisition, not to information security. A $49-per-month Chrome extension that runs at the recruiter's desktop intercepts fraud at the exact stage where intervention is cheapest. A $50,000-per-year enterprise insider-threat platform intercepts it at the stage where intervention is most expensive. Most hiring teams are funding the latter without funding the former, which is a structural inefficiency that the next twelve months of industry attention will, in our view, correct.

FAQ

Is hiring a North Korean IT worker illegal under US law?

Yes. Treasury's OFAC has sanctioned the DPRK and entities tied to its IT worker program. Paying wages to a sanctioned party violates those sanctions regardless of the employer's intent or knowledge — strict liability applies. Enforcement has focused on the facilitators (laptop farm operators, co-conspirators) more than on deceived employers, but the legal exposure is real, and a hire that is later identified as a sanctioned person creates a disclosure obligation in addition to the remediation work.

How can I identify fraudulent candidates tied to this scheme in my pipeline?

The strongest signals cluster on contact infrastructure and interview behavior. Contact details (phone and email) with no prior public association to the candidate's name. An email address with no reverse-search footprint. A LinkedIn profile under six months old with sparse connections to claimed employers. Reluctance to use a camera, or heavy virtual-background use. A shipping address that resolves to a single-family home in a region with a recent laptop-farm indictment. Three or more of these stacking on a single candidate is a strong indicator.

What is a laptop farm?

A laptop farm is a US-based residence where a co-conspirator receives and hosts company-issued laptops for remote hires. The actual operator — based overseas — connects to each device remotely via VPN and hardware or software KVM. To the employer, the device appears to be sitting in the US and geolocating correctly. To the operator, it is a fully-remote, employer-provisioned workstation. The May 2024 indictment of Christina Chapman described an Arizona laptop farm that held roughly 90 devices at its peak and supported placements at more than 300 US companies over the life of the scheme.

What should I do if I suspect one has applied to my company?

Do not confront the candidate. Freeze the process, document the signals, and escalate to security and legal. Report to the FBI through ic3.gov or a direct field office contact. If the candidate has already been hired, engage incident response the same way you would for any other insider threat.

Are small companies targets, or is this only a Fortune 500 problem?

Both. The Chapman indictment described placements at more than 300 companies — not all of them large. Fully-remote-hiring SMBs in engineering and data roles are frequently targeted because their process tends to be more forgiving and their security-team scrutiny of new hires is typically lower. The scheme is a volume operation; every placement is revenue.

Does a standard background check catch DPRK IT workers?

Usually not. The program uses real, clean US identities — often obtained from real US citizens who have agreed to participate, or stolen from public data breaches. A standard SSN trace and criminal record check will return clean because the identity, on paper, is clean. The person behind the identity is a separate problem, which is what pipeline-stage identity verification is for.