
Terminator? Not yet. But AI presents a real and present danger to family office security today

Text, audio and video deepfakes are getting more sophisticated, more common—and harder to detect, writes Mike Krygier in the first of a series of articles for Canadian Family Offices

Sometimes it’s tempting to view the current rise of AI, which began around the public release of OpenAI’s ChatGPT in late 2022, in the apocalyptic way movies depict it—you know, dystopian visions of robotic overlords displacing humanity (à la The Matrix), or intelligent machines turning the tables on their flesh-and-blood overseers (à la 2001: A Space Odyssey or Alien), or, at least, “thinking” computers taking our jobs (à la any number of business consultancy studies).


Maybe that’s just a natural reaction to new, unfamiliar technologies, and AI certainly makes for good science fiction. Of course, it’s not fiction anymore. But the reality of AI is turning out, at least so far, to be much less dramatic, though still alarming, particularly when it comes to cybersecurity.

Photo: DeepCove’s Mike Krygier

I recently chatted about the emerging landscape with Bruce Watson, one of the world’s leading authorities on security and AI, who consults with us here at DeepCove Cybersecurity. Bruce put it this way: “Beyond the predictable reactions from people who have watched Terminator and are worried about killer robots—which might be fair worries—we also have some important, seemingly mundane things to pay attention to, and one of them is cybersecurity. There’s a tremendous amount of capability and computing power landing in the hands of people who don’t wish us well.”

He knows what he’s talking about. Based in Waterloo, Ont., Bruce is not only a veteran IT executive with a PhD in computing science, but also chief advisor to Canada’s National Security Centre of Excellence, a commissioner on the Global Commission on Responsible Artificial Intelligence in the Military Domain and a subject-matter expert to the UN Secretary-General’s High-Level Advisory Body on AI.

So, it’s no small matter when a guy like Bruce says that AI-enabled cyberattacks are a growing risk—and that ultra-high-net-worth families and family offices are increasingly in the crosshairs of cybercriminals.

Why are family offices vulnerable? One of the main reasons, I think, is that many have not kept up with enterprise security measures in the same way that larger financial services firms have. A recent global survey from the law firm Dentons found that many family offices have been slow to strengthen their cybersecurity, neglect risk mitigation and security training, and fail to address insider (i.e., employee and family) security threats. That puts them at an even greater disadvantage when they receive the highly polished, highly tailored messages that AI lets cyber crooks create.


Consider phishing emails: fraudulent messages designed to gain access to online accounts or steal personal information. In the past, these were often easy to spot as frauds because they contained spelling mistakes, were clumsily crafted or were too broadly targeted (for example, a phishing email asking you to log in to an account at a bank you don’t use). Now, using generative AI, threat actors can create messages designed specifically for the intended victim, and they can look remarkably authentic and professional. Drawing on publicly available or stolen snippets of legitimate emails and other communications, attackers can replicate the tone and feel of the real-person “source,” and they can target specific business leaders, family office staff and even family members themselves.

But it’s not just text-based communications that family offices have to be wary of. Audio and video deepfakes can now be used to create more advanced and convincing forms of phishing in other media. Using publicly available generative AI tools, a threat actor can turn a few soundbites from a business leader, for example, into a realistic-sounding simulacrum of the executive’s voice, then use it to make the executive “say” whatever the crook wants.

With more resources, a similar fake can be created with video—which, at a time when we are all doing video conferences on a daily basis, is a particularly disconcerting capability. AI avatars can look, sound and act the way their real-life counterparts do, and telling the fake from the real is getting increasingly difficult.

One well-documented corporate case from earlier this year shows just what AI deepfake scams can accomplish. It began when an employee at the Hong Kong branch of a multinational organization received an email from his chief financial officer, who was based in the United Kingdom. The email outlined a planned “secret transaction,” and the employee, though initially suspicious, had his fears allayed when the CFO scheduled a video conference to discuss the matter. The online meeting took place, with the CFO and other company employees in attendance. After that, and a slew of follow-up emails, texts and more video conferences, the employee in Hong Kong eventually made a series of payments to local bank accounts, believing he was acting on a company directive.

The trouble is, neither the “CFO” nor the “employees” on the calls were real. Nor were the emails and texts. They were deepfakes generated with the help of artificial intelligence. In the end, the fraudsters took the company for about US$25 million.


Such multimedia deepfake scams are complex, time-consuming and expensive. As Bruce points out, however, the technology is advancing rapidly. “It’s very easy to generate a fake voice note that’s relatively static and gets dumped on your voice mail and asks you to take some financial action, for instance,” he says. “But we’re already in a place where, with small amounts of computing power, you can generate audio deepfakes in real time and conduct an entire phone conversation.”


As for sophisticated AI video capable of interactivity, we probably shouldn’t take much comfort in the fact that it still requires a lot more computing power than audio or text. Few threat actors would bother using deepfake video to attack regular people, but wealthy families and family offices that oversee millions or billions in assets are a different story. “From the criminal’s perspective, if the potential payoff is measured in five, six or seven figures, then it’s well worth spending the money on the computing power,” Bruce explains. 

Remember, too, that AI’s capabilities are growing by leaps and bounds, as is computing power, and both are becoming available to more people at lower cost. Put all that together, and deepfake scams are very likely to become more sophisticated, more convincing and more common.

“The culprit in this—beyond the bad actors—is the tremendous democratization of access to artificial intelligence,” Bruce says. “In many ways, that’s a good development, but it also means the barrier to entry for the bad actors is now much, much lower than ever.”

From a cybersecurity perspective, the world has entered uncharted territory. It might not be populated with killer robots and evil computers just yet, but it’s the world we live in. For family offices, recognizing the reality and scope of the threat is an important first step.

In my next column, I’ll talk about some of the strategies family offices and wealthy families can deploy to mitigate the risk of AI-enabled cyberattacks. Spoiler alert: there is no techno-fix, but neither is the picture all doom and gloom.


Mike Krygier is CEO of Toronto-based DeepCove Cybersecurity, which he founded in 2022 to provide industry-leading cybersecurity solutions to organizations. With more than two decades of experience in cybersecurity, Mike has held multiple leadership roles across the private and public sectors, including at Google, New York City Cyber Command and Mandiant. Mike holds an M.Sc. in Information Security from Royal Holloway, University of London, along with various industry certifications.

