The Deepfake Wasn’t the Problem. The Absence of a ‘Prove It’ Button Was.
Why Genuine Presence Beats Deepfake Detection
Article by Jeff Highman, CTO of Trua
North Korean hackers just pulled off one of the most elegant social engineering attacks I’ve seen.
The group, tracked as UNC1069, hijacked a crypto executive’s Telegram account. Spent weeks building trust with targets — real conversations, real rapport. Then invited them to a Zoom call where a deepfake of the exec was waiting, live, on camera.
Note: UNC1069 is a North Korea–linked threat group that has been documented targeting cryptocurrency firms with AI-enabled social engineering, deepfake video, and a ClickFix malware chain.
While the targets sat there watching a synthetic face talk to them, the attackers deployed malware through a fake ClickFix prompt. Keychain credentials. Browser passwords. Telegram sessions. Apple Notes. Crypto wallets. All gone.
For crypto exchanges, Web3 startups, and financial institutions operating across the US, EU, and APAC, this is now a board-level risk, not a niche security story.
And now everyone’s having the wrong conversation about it.
The Wrong Conversation
The security industry is doing what it always does: scrambling to fix the front door.
Better identity verification. Stronger authentication. Decentralized identifiers (DIDs). Biometric login. Hardware tokens. All of it focused on the same flawed assumption: if we can just prove who someone is at the start, we’re safe.
We’re not.
The deepfake attack didn’t fail at authentication. Nobody bypassed a login. Nobody cracked a password. The attackers walked through a door that was already open — a hijacked Telegram account with weeks of legitimate conversation history — and then stayed in the room unchallenged.
The problem isn’t how people get in. It’s that once they’re in, nobody ever asks again.
Genuine Presence
Here’s the concept the entire industry is sleeping on: genuine presence.
Not “did you authenticate?” but “are you actually here, right now, in this moment?”
In security terms, genuine presence means verifying that the right human is interacting in real time, not just that a device was authenticated once at login.
Think about how every digital interaction works today. You log in. You’re verified. And then… that’s it. The system assumes you’re you for the entire session. The entire meeting. The entire transaction chain. One checkpoint, then blind trust forever.
That’s insane!
Genuine presence means the ability to be challenged at any point in an interaction. Mid-meeting. Mid-transaction. Mid-conversation. Not as a disruption — as a capability. A mechanism that exists and can be invoked whenever the stakes change or something feels wrong.
The Building Analogy
Your badge gets you into the building lobby. Fine.
But imagine a building where that badge swipe at the front door is the only security check. Every floor, every office, every server room, every executive suite — all open once you’re past the lobby.
Absurd, right? No building works that way. Every sensitive area has its own access control. The building doesn’t assume you’re still authorized just because you were authorized at the entrance.
Now look at a Zoom call. A Telegram chat. A financial video conference.
One entry point. Zero re-verification. The entire interaction operates on lobby-level trust. That’s exactly what UNC1069 exploited. They got past the lobby — the Telegram account — and then operated with unchallenged access for the entire attack chain.
Step-Up Verification
The fix isn’t better authentication at the front door. It’s step-up verification throughout the journey. Step-up verification is the idea of escalating authentication requirements when risk increases, instead of trusting a single point-in-time login.
The concept is simple: the ability to escalate proof of presence at any point in an interaction, proportional to what’s at stake.
Casual conversation? Show up. That’s fine.
Someone asks you to click a link? Step up.
About to authorize a transaction? Step up again.
Something feels off about a participant? Challenge them. Right now. In the moment.
This isn’t theoretical. Step-up authentication exists in banking — your app might let you check your balance with a PIN but require biometrics for a transfer. The pattern is proven. But nobody’s deploying it where humans actually make trust decisions: in meetings, in chats, in live conversations.
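To make the pattern concrete, here is a minimal sketch of a risk-tiered step-up policy. The action names, tiers, and mapping are illustrative assumptions, not any platform’s real API; the point is that required proof scales with what’s at stake, and unknown actions fail closed.

```python
# Hypothetical sketch: map interaction risk to a required proof level,
# mirroring the banking-style step-up pattern described above.
from enum import IntEnum

class ProofLevel(IntEnum):
    SESSION = 0    # the existing authenticated session is enough
    BIOMETRIC = 1  # quick biometric re-check
    HARDWARE = 2   # hardware-token challenge-response

# Assumed risk triggers -> minimum proof required (illustrative values)
POLICY = {
    "chat_message": ProofLevel.SESSION,
    "link_click_request": ProofLevel.BIOMETRIC,
    "transaction_approval": ProofLevel.HARDWARE,
    "manual_challenge": ProofLevel.HARDWARE,  # "something feels off"
}

def required_proof(action: str) -> ProofLevel:
    # Unrecognized actions default to the strongest check: fail closed.
    return POLICY.get(action, ProofLevel.HARDWARE)
```

A real deployment would feed this from live risk signals (new device, unusual request, participant-raised challenge) rather than a static table, but the shape is the same: escalation is a lookup, not a redesign.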
Why Current Tools Are Blind
Every major meeting platform — Zoom, Teams, Google Meet, WebEx — has the same fundamental flaw: zero step-up capability.
Once a participant is “in” the call, their presence is assumed for the duration. There is no mechanism for another participant to say “something feels off — prove yourself.” No challenge button. No presence re-verification. Nothing.
Chat platforms are worse. Telegram, WhatsApp, Signal — they authenticate the device, not the human. Once a session is active, whoever controls that session is that person as far as every other participant is concerned.
The UNC1069 attackers didn’t need to beat sophisticated security. They needed to beat nothing. Because after the initial access, there was nothing left to beat.
What “Challenge Presence” Actually Looks Like
This doesn’t have to be complicated or disruptive.
A button in every meeting platform: “Verify Participant.” Click it, and the challenged participant completes a quick presence proof — biometric, hardware token, a response that a deepfake can’t fake in real time. It could be as simple as a live challenge-response that requires genuine human interaction with their own verified device.
Could DIDs play a role here? Sure — as one mechanism among many for anchoring a step-up response to a verified identity. But the DID isn’t the point. The capability is the point. The ability to say, at any moment, “prove you’re actually here.”
The technology exists. Biometric verification takes less than a second. Hardware tokens are ubiquitous. Challenge-response protocols are well understood. The pieces are all there. What’s missing is the integration. Nobody has wired these capabilities into the places where humans are actually being deceived.
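As a sketch of how small that integration is, here is a toy challenge-response using Python’s standard library. The shared secret stands in for whatever credential an enrolled device would hold (in practice a hardware-backed key, not a shared secret); binding the answer to a fresh nonce and a timestamp is what makes a replayed or pre-computed response — the kind a deepfake pipeline could supply — fail.

```python
# Toy challenge-response presence proof (illustrative, not a real protocol).
import hashlib
import hmac
import secrets
import time

DEVICE_SECRET = secrets.token_bytes(32)  # provisioned at enrollment (assumed)

def issue_challenge() -> bytes:
    # Fresh, unpredictable nonce: can't be answered ahead of time.
    return secrets.token_bytes(16)

def device_respond(secret: bytes, nonce: bytes) -> str:
    # Bind the answer to the nonce AND the current time, so a recorded
    # response is useless outside a short window.
    msg = nonce + int(time.time()).to_bytes(8, "big")
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify(secret: bytes, nonce: bytes, response: str, window: int = 30) -> bool:
    # Accept responses computed within the last `window` seconds.
    now = int(time.time())
    for t in range(now - window, now + 1):
        msg = nonce + t.to_bytes(8, "big")
        expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, response):
            return True
    return False

nonce = issue_challenge()
assert verify(DEVICE_SECRET, nonce, device_respond(DEVICE_SECRET, nonce))
```

A production version would use asymmetric keys (e.g. a FIDO2 authenticator signing the nonce) so the platform never holds the secret — but the flow a meeting tool needs to wire in is exactly this short.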
Continuous Trust, Not Point-in-Time Trust
The deeper shift is philosophical.
We’ve built every digital system around point-in-time trust. Authenticate once, assume forever. It’s the digital equivalent of checking someone’s ID at a bar and then assuming everyone in the building for the rest of the night is over 21.
Genuine presence requires continuous trust. The understanding that verification isn’t a gate you pass through — it’s a state you maintain. And that state can be challenged, re-established, or escalated at any point in the journey.
This isn’t about making interactions harder. It’s about making them honest. Most of the time, step-up verification sits dormant. You never notice it. But the moment something matters — the moment stakes rise, the moment something feels wrong — the mechanism is there.
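The shift from gate to state can be sketched in a few lines. This is an illustrative model, not any vendor’s API: trust starts where today’s systems stop, but here it can be contested and must be re-earned.

```python
# Illustrative sketch: session trust as a mutable state, not a one-time gate.
class SessionTrust:
    def __init__(self):
        # Today's systems set this once at login and never revisit it.
        self.state = "verified"

    def flag(self):
        # Any participant or risk signal can contest presence mid-session.
        self.state = "challenged"

    def prove(self, passed: bool):
        # A successful step-up proof restores trust; failure revokes it.
        self.state = "verified" if passed else "revoked"

s = SessionTrust()
s.flag()           # "something feels off" mid-meeting
s.prove(passed=True)
assert s.state == "verified"
```

The three transitions are the whole argument: verification is a state you maintain, and there is always a path from “challenged” back to “verified” — or out of the room.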
If you run remote teams, distributed finance operations, or global hiring over video, your meeting stack needs a “challenge presence” pattern baked in.
The Call to Action
Every platform that facilitates human-to-human interaction needs a “challenge presence” capability. Period.
Every meeting tool. Every chat app. Every financial call platform. Every hiring interview conducted over video.
The UNC1069 attack worked because a deepfake sat unchallenged on a video call. That’s not a technology failure. That’s a design failure. We built communication tools that assume presence from pixels and never question it again.
The deepfake problem isn’t going to be solved by better detection algorithms. Deepfakes will always get better. The solution is giving humans the ability to challenge what they’re seeing, in real time, at any moment.
Stop building taller walls at the front door. Start building the ability to ask “are you still you?” at every step of the journey. Because the next deepfake won’t be less convincing. It’ll be more. And right now, there’s no button to push when something feels wrong.
Questions & Answers

Q: What is a deepfake video call attack?
A: A deepfake video call attack uses AI-generated or replayed video in a live meeting to impersonate a trusted person and push victims into actions like running malware or authorizing payments.

Q: How did the UNC1069 attack work?
A: The group hijacked a trusted Telegram account, scheduled a fake Zoom-style meeting, and used a live deepfake of the executive plus a ClickFix prompt to deploy multiple malware families and drain sensitive data and wallets.

Q: What is a “challenge presence” capability?
A: It is a step-up verification control inside meetings or chats that lets participants trigger real-time identity checks, like biometrics or hardware tokens, whenever something feels off.

Q: Why isn’t better deepfake detection enough?
A: Because deepfakes are improving fast, and most attacks exploit blind trust in authenticated sessions, we need mechanisms for continuous trust, not just better front-door checks.