If you got a video call from your CEO asking you to send sensitive documents over to them ASAP, would you do it?
In a world where we are constantly connected through our devices, this request may seem genuine at first. Our always-online nature has conditioned us to expect results faster; gone are the days of handing off documents in person.
Whether you think this is a blessing or a curse, it is a tremendous point of vulnerability that you can expect bad actors to exploit. When your boss calls asking for your help finding misplaced documents they need for an upcoming meeting, you answer that call immediately.
But, how do you know that face on your screen is actually your boss?
Deepfakes Create New Security Concerns
Recently, the Biden administration said it was “alarmed” by the circulation of explicit, AI-generated images of Taylor Swift. The images are a dangerous and harmful use of AI, one that has caused Swift and countless others significant distress.
Deepfakes do not require a deep understanding of technology or an artist’s eye. The technology is available and troublingly good at making real-time masks of an individual.
After some quick scans, you can appear as a celebrity, a cartoon character, or an important political figure on your computer. If you think this damage is exclusive to celebrities, however, you aren’t seeing the bigger threat here.
You might not think twice if your manager, client, or even a well-known contractor reaches out to you. However, if deepfake technology can effortlessly impersonate a celebrity and fool hundreds of people online, it can impersonate the people in your life as well.
High-Stakes Mistakes
Let’s look at an example: Your boss starts a video chat with you. Their face appears on your screen and says, convincingly, that they need everything you have on “Client X.” You send it over and pat yourself on the back for a job well done. Now Client X is completely exposed, you are likely facing a lawsuit, and your other clients will demand answers.
Who is at fault?
Can you blame an employee for sending sensitive information seemingly at their CEO’s request? In a world where we’ve normalized video calls, we’ve forgone the implicit security that comes with handing off precious materials face-to-face.
Not all of these bad actors are interested in corporate espionage or stealing credit card information. Some only want to bring chaos to your company. Imagine a deepfake of one of your public executives making fringe statements that scare off investors. Even if you feel people should easily spot a fake, the damage to your company’s reputation could be serious.
For example, in 2022 a fake Eli Lilly account on Twitter sent the company’s stock down 4.37% with a single tweet. A company’s public image is important, and sadly you can’t expect the average online user to research the validity of what they see online. So the real question is, “How can you continue to connect with your customers online while also making sure they know all interactions are coming from the real source?”
Proving Genuine Presence
Even the strongest passwords are not unbreakable. If someone hacks your email or social media, they can make posts online under your name. AI technology adds a new layer of complexity to this problem, as we’ve seen with what happened to Eli Lilly. The only forms of security truly unique to a person are their biometric and biographic data.
Combining biometric and biographic information allows you to create an ecosystem in which everyone has confirmed their identity before you start communicating. Technology such as TruaID uses biometric information to restrict logins to users who can prove a “genuine presence” when signing on to a platform. Just like using a face ID scan to open your phone, biometric data should be required to access emails, payroll, workflow applications like Slack or Teams, or any other sensitive system.
Make no mistake: your company is worth attacking. Whatever else you can say about hackers and scammers, they are clever above all else. AI will be the next tool they use to crack your security, and it only takes one successful attack to derail your business. Whether you work for a billion-dollar business or a mom-and-pop shop, you could be targeted.
This is the time to become leaders of our industries. To tackle these problems head-on, we need to address them at the source: proof of identity. Liveness detection is the next step in security, and soon users will be expected to confirm they are the one behind the keyboard and not an AI recreation of their face. Nobody should ever be deceived by someone who merely looks and sounds like their boss. Deepfakes can mimic your face and voice on screen, but they cannot pass a genuine-presence biometric check. As we move forward in this ever-evolving digital age, we need to embrace biometric security backed by verified biographic information to rebuild trust and safety in our professional lives.