Combine ‘deep’ from deep learning with ‘fake’ and you have ‘deepfake’ – a method that uses artificial intelligence to create synthetic images of people, often with a lip-synched voice. The end result can be highly convincing, and the intention is often to spoof a celebrity or politician with the aim of discrediting them. But could there also be wider dangers?

This certainly seems to be the case, not least when it comes to election interference. Facebook said in January that it would remove videos created by AI, and last year the Wall Street Journal reported a particularly alarming case involving a ‘cloned’ voice.

Who’s on the phone?

The paper reported on the CEO of a UK energy firm who was fooled into believing that a call from his boss at the firm’s German parent company was genuine. The UK executive was told to transfer €220,000 to a Hungarian account as a matter of urgency, and he complied because the caller sounded identical to his German boss, whom he had spoken to many times before.

The case was recounted by the company’s insurer, Euler Hermes, whose fraud expert Rüdiger Kirsch says he believes voice hacking technology was used. Once the payment had been made, there were two further calls: one to say the payment had not gone through, and another to request that it be transferred again. By this point the UK CEO had become suspicious, noticing that the call was being made from Austria rather than Germany. But it was too late: the first payment had gone through, and it was subsequently found to have been swiftly moved from the Hungarian account to one in Mexico and then on to other locations.

Criminals with the know-how can use AI to create synthetic audio. Then there are the AI tools known as encoders and decoders, which can be used to swap facial features between images – there is even an app called Zao that lets users superimpose their faces onto a list of TV and movie characters. But this is not simply about looking like a celebrity: the same technology could allow criminals, or perhaps even disgruntled former employees, to create highly damaging malicious content. A simplified sketch of the encoder–decoder idea is shown below.
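To make that idea concrete, here is a minimal sketch in PyTorch of the architecture behind classic face swaps: one shared encoder paired with a separate decoder per identity. The layer sizes, 64×64 input resolution and training outline are illustrative assumptions, not any particular tool’s implementation.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder face-swap idea.
# All dimensions here are illustrative assumptions, not a production system.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # 256-dim latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training (outline): reconstruct each person's faces through the SHARED encoder,
#   loss_a = mse(decoder_a(encoder(faces_a)), faces_a)
#   loss_b = mse(decoder_b(encoder(faces_b)), faces_b)
# so the encoder learns identity-agnostic features (pose, expression, lighting)
# while each decoder learns to render one specific face.

# Swapping: encode a face of person A, decode with person B's decoder.
face_a = torch.rand(1, 3, 64, 64)     # placeholder for a cropped 64x64 face
swapped = decoder_b(encoder(face_a))  # B's appearance, A's pose and expression
```

The swap works precisely because the two decoders share one encoder: the latent code carries pose and expression but not identity, so whichever decoder renders it supplies the face.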

Growing concern

Meanwhile, a survey this year of chief information security officers working in financial services found growing concern about deepfakes. The research, from biometric technology firm iProov, found that some 77% of these professionals are worried about the potential for deepfakes to be used fraudulently – with online payments and personal banking services thought to be most at risk. Despite this, only 28% had taken any action.

According to iProov’s CEO, Andrew Bud:

“It’s likely that so few organizations have taken such action because they’re unaware of how quickly this technology is evolving. The latest deepfakes are so good they will convince most people and systems, and they’re only going to become more realistic.”

Of particular concern to decision makers was the potential for deepfake images to compromise facial recognition defenses. “The era in which we can believe the evidence of our own eyes is ending. Without technology to help us identify fakery, every moving and still image will, in future, become suspect,” said Bud.

Although the internet is impossible to police fully, it has been reported that Google has released deepfake samples to help researchers develop detection technologies, while Facebook and Microsoft are working with universities to offer awards to researchers who find ways of spotting and preventing the spread of manipulated media. As for voice fraud, this is probably the most concerning threat for financial services. It is understood that voice hacks are largely blocked when humans take the calls – but the growing use of automated voice recognition systems could be easier to penetrate.
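One reason human call-takers fare better is that they improvise: a pre-generated cloned clip cannot answer a question chosen at call time. The sketch below shows a simple challenge-response liveness check that an automated voice channel could borrow from that playbook. The word list and the transcribe() helper are hypothetical placeholders, and real-time voice cloning could still attempt to defeat it – this raises the bar rather than closing the gap.

```python
# Sketch of a challenge-response liveness check for an automated voice channel.
# A phrase chosen only after the call connects defeats pre-recorded cloned audio.
import secrets

CHALLENGE_WORDS = ["harbor", "violet", "granite", "meadow", "copper", "lantern"]

def make_challenge(n: int = 3) -> str:
    """Build a random phrase that could not have been recorded in advance."""
    return " ".join(secrets.choice(CHALLENGE_WORDS) for _ in range(n))

def passes_liveness(spoken_transcript: str, challenge: str) -> bool:
    """Compare the caller's words to the challenge, ignoring case and spacing."""
    return spoken_transcript.lower().split() == challenge.lower().split()

challenge = make_challenge()
print(f"Please repeat the following phrase: {challenge}")
# transcript = transcribe(recorded_audio)  # hypothetical speech-to-text call
# verified = passes_liveness(transcript, challenge)
```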

There is clearly potential for criminals to leverage AI for fraudulent purposes, so now is the time for the financial services sector to be on its guard: training its people, adding extra verification steps and investing in counter-technology that can detect discrepancies.
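As one illustration of what that extra verification might look like, here is a hedged sketch of a rule that forces out-of-band confirmation – a call back to a number on file, say – for risky payment requests. The thresholds and field names are invented for this example, not a real banking API; notably, the €220,000 request described earlier trips every rule.

```python
# Sketch of an out-of-band verification gate for payment requests.
# Thresholds and fields are illustrative assumptions, not a real banking API.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount_eur: float
    requester_country: str   # where the request claims to come from
    caller_country: str      # where telephony metadata says the call originated
    beneficiary_is_new: bool
    flagged_urgent: bool

def requires_manual_verification(req: PaymentRequest) -> bool:
    """True if the request must be confirmed over a separately established
    channel (e.g. a call back to a known number) before any money moves."""
    return any([
        req.amount_eur >= 10_000,                     # high value (assumed threshold)
        req.beneficiary_is_new,                       # previously unseen account
        req.flagged_urgent,                           # urgency is a pressure tactic
        req.caller_country != req.requester_country,  # e.g. Austria vs Germany
    ])

# The transfer described in the Euler Hermes case trips all four rules.
request = PaymentRequest(
    amount_eur=220_000,
    requester_country="DE",   # the caller posed as the boss at the German parent
    caller_country="AT",      # the call was noticed to come from Austria
    beneficiary_is_new=True,  # an unfamiliar Hungarian account
    flagged_urgent=True,      # the transfer was demanded as a matter of urgency
)
assert requires_manual_verification(request)
```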