They Shook Your Hand First

How state-sponsored attackers bypass everything your audit checks.

Juan Jaramillo

Drift lost $285 million. Bybit lost $1.5 billion. Same group. You know the stories.

What most teams haven't worked out is what to actually change.

DPRK went to conferences, built relationships, deposited capital, and waited. Six months later, they moved. By then, the people they were targeting had shaken hands with them across multiple continents. The relationship felt real because it was designed to.

Your contracts can be perfectly audited and this could still happen to you. The gap is operational and most DeFi teams have not closed it.

Before we get started, one caveat: writing this down is the easy part. Applying it is not.

This could have happened to any team. The attack was designed to be invisible. Nobody walks into a six-month business relationship expecting the other side to be a state-sponsored operation.

The playbook is documented now. What follows is what protecting against it actually looks like:

Vetting Counterparties

A trading firm wants to integrate. Standard request. Hundreds of protocols handle this every month.

"Legitimate trading firm" is now a role state-sponsored actors play well:

  • Real on-chain history
  • Detailed strategy conversations
  • Capital deployed into your protocol
  • Months of behaving exactly as expected

Drift followed this pattern. The relationship was real. The compromise didn’t come from access abuse. It came from a repo shared during a normal workflow.

That’s the part most teams miss.

You are not just vetting who they are; you are inheriting everything they send you.

You cannot background-check every engineer behind every firm. Even if you could, it would not have stopped this. The compromise happened inside a trusted relationship.

What you can control is what they are allowed to do to your systems:

Before granting access

  • Verify the firm exists beyond its website
  • Independently research the founders and team members
  • Check on-chain history of every wallet
  • Treat all integration requests with baseline skepticism

Once access is granted

Treat everything they send you as untrusted.

  • Never clone repos or run code on your primary machine
  • Use isolated environments every time
  • Separate development from signing
  • No signing keys on development devices
  • Use dedicated hardware for signing
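One way to make "isolated environments every time" a habit rather than a judgment call is to script it. The sketch below is a minimal, hypothetical helper (assuming Docker is installed and using the public `alpine/git` image) that builds a `docker run` command for cloning an untrusted repo into a throwaway container with no host mounts and no extra privileges:

```python
def isolated_clone_cmd(repo_url: str, image: str = "alpine/git:latest") -> list[str]:
    """Build a `docker run` command that clones an untrusted repo inside a
    throwaway container: no host filesystem mounts, no added privileges,
    and the container is removed as soon as it exits."""
    return [
        "docker", "run",
        "--rm",                                  # discard the container afterwards
        "--cap-drop", "ALL",                     # drop every Linux capability
        "--security-opt", "no-new-privileges",   # block privilege escalation
        image,
        "clone", "--depth", "1", repo_url, "/untrusted/repo",
    ]
```

Run it with `subprocess.run(isolated_clone_cmd(url))`; the point is that the clone never touches your primary filesystem, so a malicious build script has nothing of yours to steal.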

Operational controls

  • Minimum permissions only
  • Time-limited access
  • Scheduled access audits
  • Same-day offboarding
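Time-limited access and scheduled audits only work if expiry is the default, not a calendar reminder. A minimal sketch (names and structure are illustrative, not a real tool): every grant carries a hard expiry, and the audit function is simply "everything past its expiry that has not been revoked yet":

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessGrant:
    who: str
    scope: str            # narrowest permission that gets the job done
    expires: datetime     # every grant gets a hard expiry, no exceptions

def active(grants, now=None):
    """Grants still in force. Anything past expiry is dead by default,
    so forgetting to offboard fails safe."""
    now = now or datetime.now(timezone.utc)
    return [g for g in grants if g.expires > now]

def audit(grants, now=None):
    """Grants that should already have been revoked: the offboarding queue."""
    now = now or datetime.now(timezone.utc)
    return [g for g in grants if g.expires <= now]
```

The design choice that matters is the direction of failure: if nobody runs the audit, access still expires on its own.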

Verifying People

DPRK operatives have been contributing to DeFi protocols since DeFi Summer. Researchers have identified 40+ major protocols that employed them at some point. The “seven years of blockchain experience” on the resume is not fake; they have, in fact, been doing this since the beginning. Some are still active contributors at protocols right now.

Many of these “employees” passed normal hiring processes. This is not a problem you solve with “better intuition” on calls:

Before the interview

  • Cross-check work history directly
  • Contact previous employers through independent channels
  • Review GitHub history for inconsistencies
  • Run emails and handles through breach databases

During the interview

  • Watch for deepfake behavior
  • Ask candidates to perform simple real-time verification (turn sideways, hold ID)
  • Evaluate carefully how they think, not just what they say
  • Verify where the work device actually is (not just where they claim to be)
  • Watch for shipping address mismatches
  • Flag installation of remote admin or screen-sharing tools early
  • Require hardware-based MFA for all access

Then there is the Kim Jong-un test. Some teams have started asking candidates to say something critical about North Korean leadership on the call. It sounds like a lot. It is not. Operatives will not do it regardless of how deep the cover story runs. Apparently loyalty surpasses the assignment. Make of that what you will.

First 30 days

  • Read-only access only
  • No production or multisig access
  • Document all permissions
  • Watch for repeated access requests

Offboarding

  • Same-day revocation
  • Rotate all secrets
  • Review commit history

The shift is simple: trust is granted slowly, revoked instantly, and audited continuously.

Conference Opsec

Conference floors are where these relationships start. Your vetting process is built for known threat models. A six-month in-person relationship sits outside that model entirely. That asymmetry is the strategy.

Skip this section and everything else in this article becomes harder to enforce.

  • Never demo on your main machine at side events. Bring a dedicated device with nothing sensitive on it, or don't demo at all.
  • Never screen-share a device with protocol access or multisig credentials. Screenshots only.
  • Use your phone as a hotspot; never connect to conference WiFi on a work device. Conference networks are monitored by anyone with a $30 device and a reason to watch.
  • Be skeptical of persistent follow-up from people you met once.
  • If someone from a side event sends you a repo, a link, or an app to try: isolated environment or don't open it. 

Device and Signing Hygiene

Two normal actions caused these compromises: cloning a repo, and installing a “TestFlight” wallet app.

  • Never clone a repo from a counterparty on your main machine. Disposable VM, every time.
  • Never download apps or tools sent by external teams regardless of relationship length.
  • Keep your editors updated. A patch that exists but wasn't applied is not protection.

On signing

Blind signing is how nine-figure losses happen (Bybit). Malicious JavaScript injected into a trusted interface shows you a legitimate transaction while executing a different one. Signers approve what they see, not what's actually running.

  • Always verify transactions independently
  • Use separate tools or devices
  • Use hardware keys for all critical access
  • Rotate sessions immediately after offboarding
  • Audit browser extensions across the team. Any unvetted extension is a potential keylogger. This takes 20 minutes. 
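"Verify transactions independently" means decoding the raw calldata yourself, on a separate tool or device, and comparing it against what the interface claims. A minimal sketch for the most common case, an ERC-20 `transfer(address,uint256)` call (the `0xa9059cbb` selector is the standard one; the function name here is illustrative):

```python
TRANSFER_SELECTOR = "a9059cbb"  # first 4 bytes of keccak256("transfer(address,uint256)")

def decode_erc20_transfer(calldata: str):
    """Decode recipient and amount from ERC-20 transfer() calldata so a signer
    can compare them against what the UI claims, on a separate device."""
    data = calldata.removeprefix("0x").lower()
    if data[:8] != TRANSFER_SELECTOR:
        raise ValueError("not an ERC-20 transfer() call")
    recipient = "0x" + data[8 + 24 : 8 + 64]   # address = last 20 bytes of word 1
    amount = int(data[8 + 64 : 8 + 128], 16)   # uint256 = word 2
    return recipient, amount
```

If the recipient you decode does not match the recipient on screen, the interface is lying to you. That comparison is exactly what the Bybit signers never had a chance to make.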

Access Control and Protocol Design

With admin access compromised, the protocol was drained in twelve minutes.

Part of what made that possible: a 2/5 multisig with no timeouts, and durable nonces that allowed transactions to be pre-signed and held. Signatures collected gradually over time, no alarms raised, everything executed at once when the moment came.

A 2/5 threshold is one of the most common multisig configurations in DeFi, but it creates a false sense of protection when there are no timeouts on execution and no restrictions on durable nonces.

A more responsible setup:

  • Mandatory execution delay between a transaction being fully signed and it going through, giving the team a window to catch and cancel
  • Restrictions on durable nonces for high-value operations
  • A separate cancellation key that can veto any queued transaction during the delay window 
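The delay-and-veto mechanics above can be sketched in a few lines. This is illustrative pseudologic, not production multisig code; the class and key names are hypothetical:

```python
import time

class TimelockQueue:
    """Mandatory delay between 'fully signed' and 'executed', plus a separate
    cancellation key that can veto anything still inside the window."""

    def __init__(self, delay_seconds: int, cancel_key: str):
        self.delay = delay_seconds
        self.cancel_key = cancel_key
        self.queue = {}   # tx_id -> earliest allowed execution time

    def enqueue(self, tx_id: str, now: float = None) -> float:
        eta = (now if now is not None else time.time()) + self.delay
        self.queue[tx_id] = eta          # signing is done; the clock starts here
        return eta

    def cancel(self, tx_id: str, key: str) -> None:
        if key != self.cancel_key:
            raise PermissionError("only the cancellation key can veto")
        self.queue.pop(tx_id, None)      # veto anything still queued

    def execute(self, tx_id: str, now: float = None) -> None:
        now = now if now is not None else time.time()
        if tx_id not in self.queue:
            raise KeyError("unknown or cancelled transaction")
        if now < self.queue[tx_id]:
            raise RuntimeError("still inside the delay window")
        del self.queue[tx_id]            # past the window: allowed to go through
```

The point of the structure: pre-signed transactions cannot fire instantly, and a single fast veto key beats a patient attacker who collected signatures over months.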

Beyond the multisig configuration: Adding power should be slow. Removing power should be fast.

  • Timelocks on admin transfers. Any proposal to change control of a critical role needs a mandatory waiting period. That window is your intervention window.
  • Separate your pause function from your resume function. Fast keys halt things. Resuming requires a slower, higher-trust process.
  • A circuit breaker: a global emergency stop any guardian can trigger immediately, but only a slower authority can undo.
  • Multiple admin roles with narrow, documented permissions. A compromised key should not be able to do everything. This belongs in your architecture review before deployment, not as an afterthought. 
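The pause/resume asymmetry is worth seeing concretely. A minimal sketch (illustrative names, not a real framework): any guardian halts the system immediately, but only a slower, higher-trust role can bring it back, and only after a cooldown:

```python
class CircuitBreaker:
    """Fast to stop, slow to restart: any guardian can pause instantly,
    but only the admin role can resume, and only after a cooldown."""

    def __init__(self, guardians: set, admin: str, cooldown: int):
        self.guardians, self.admin, self.cooldown = guardians, admin, cooldown
        self.paused_at = None            # None means the system is live

    def pause(self, caller: str, now: int) -> None:
        if caller not in self.guardians:
            raise PermissionError("only a guardian can pause")
        self.paused_at = now             # takes effect immediately

    def resume(self, caller: str, now: int) -> None:
        if caller != self.admin:
            raise PermissionError("only the admin can resume")
        if self.paused_at is None:
            return                       # nothing to do
        if now < self.paused_at + self.cooldown:
            raise RuntimeError("cooldown not elapsed; resuming is deliberately slow")
        self.paused_at = None
```

Removing power (pause) needs one key and zero delay; adding power back (resume) needs the strongest key and time. That asymmetry is the whole design.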

Supply Chain

A $1.5 billion loss started at a developer's workstation compromised days earlier. By the time the team signed a routine transfer, the interface they trusted was running different code.

Key practices

  • Audit all dependencies
  • Verify frontend integrity
  • Understand where code is loaded from
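"Verify frontend integrity" has a concrete minimum: pin a hash of every served bundle and vendored dependency at review time, and alert on any drift. A stdlib-only sketch (the function name is illustrative):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Compare a served frontend bundle (or vendored dependency) against a
    hash pinned at review time. Any mismatch means the code changed under you."""
    return hashlib.sha256(data).hexdigest() == expected_sha256.lower()
```

For scripts loaded directly in the browser, the same idea ships natively as Subresource Integrity (`integrity="sha384-…"` on the script tag), so the browser itself refuses a tampered bundle.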

One more thing. DPRK runs this in reverse too. They set up fake crypto companies and post real jobs. Developers apply, go through a technical interview, and get compromised during the coding assessment. Confirmed front companies: BlockNovas LLC, Angeloper Agency, SoftGlide LLC. If a company asks you to clone and run a repo as part of a technical screen, that is a known attack pattern. Isolated environment, every time.

What You're Already Leaking

Before anyone approaches your team, they have already studied you.

Your Telegram group. Your Notion workspace. Your Discord with an "internal" channel that is not actually restricted. Your team's LinkedIn profiles listing every tool you use, every protocol you have worked on, and where your contributors will be next month.

All of it is an intelligence feed for anyone patient enough to read it.

Audit who actually has access to your internal documentation right now. Former contributors, trial hires from six months ago, the auditor who needed temporary access for one review. That list is longer than you think. Remove people who no longer need access today.

Never discuss deployment timelines or key rotation schedules in any group channel. 

Incident Response

Most teams discover they have no incident response plan at the exact moment they need one.

If something feels wrong: freeze first, investigate second. Every minute before you freeze is a window for evidence to disappear.

If something feels wrong

  1. Freeze immediately
  2. Preserve evidence. Do not wipe devices.
  3. Contact @SEAL911
  4. Engage forensics
  5. Publish the incident; it helps others avoid the same failure.

The Gap Everyone Should Be Talking About

The gap between "our contracts are audited" and "our operation is secure" is exactly where these attacks live.

The threat is patient, well-funded, and designed specifically to pass every check your team currently runs. A perfect audit report next to a compromised signing device is not security.

Closing this gap requires a different kind of review. One that looks at who has access to your infrastructure, how your team handles devices and signing, what your counterparty vetting process actually looks like, and whether your protocol design survives twelve minutes with compromised keys. 

At Adevar Labs, we already audit your code and infrastructure. Now we are building the operational layer on top. A review of how your team actually works, because that is where these attacks land.

TL;DR

  • Audit reports do not protect against trusted relationships
  • Control what counterparties can do
  • Hiring processes must account for state-sponsored actors
  • Conferences are high-risk environments
  • Never blind sign transactions
  • Fix weak multisig setups
  • Audit internal access regularly
  • If something feels wrong, freeze first and call @SEAL911
  • Code and infrastructure audits are not enough. The operational layer needs to be reviewed too.

Sources: Drift post-mortem (April 2026), Bybit/Sygnia report (Feb 2025), TRM Labs, Elliptic, Mandiant UNC4746, Taylor Monahan, SEAL911, Chainalysis 2026 Crypto Crime Report.