Hi Ben, thanks for speaking with Digital Bulletin. We hear a lot about deepfakes in the world of entertainment and social media, but what is the threat to enterprise?
Deepfakes manipulate the trust implicit in voice and images, and that creates three major threats for enterprises. Firstly, deepfakes could be used to create misinformation that attacks an executive, product or brand and destroys public confidence, which could in turn feed short-selling or stock manipulation for profit, or form the basis of a hacktivist campaign. Secondly, payment fraud, where the traditional eleventh-hour CEO email demanding an immediate payment is replaced with a synthesised phone or video call. And lastly, social engineering techniques such as phishing, vishing or whaling can be enhanced by deepfakes: a request that appears to come from a known and trusted source can lead to credential and network compromise, followed by a data breach, ransomware or any other attack.
Can you tell us how deepfakes are created? Is it a sophisticated technology?
Deepfake technology is being developed by a wide range of groups with different motivations. Deepfakes are primarily built on machine learning (ML) and artificial intelligence (AI). The technology ‘learns’ what a person looks or sounds like from multiple videos, images or voice clips taken from various angles, and combines this with computer graphics to tweak the original as desired. The more relevant audio, image or video the technology has as a dataset to learn from, the better the quality of the fake produced. Consequently, politicians, executives and others in the public eye are more exposed to this risk than most, but members of the general population are increasingly making more of their content available online. While most people immediately think of fake videos when they hear the term ‘deepfake’, the technology is being used in several different ways.
Biometric-based deepfakes are one of the most mature forms of the technology, in that there have been real world attacks against biometric systems using deepfakes. There is a typical pattern in the criminal underworld whereby new technology which has a clear financial reward always matures faster than other experimental technologies. Given the reliance on biometric systems as a validation of identity, we should anticipate more of these attacks and take steps to move away from especially vulnerable biometric identifiers.
Image-based deepfakes are a natural evolution from photoshopping, where AI is used to create much more seamless fake images. So far, this technology has been largely unsuccessful in fooling facial recognition systems, but it has had success in creating images for use in false news and political attacks. As misinformation grows and the technology to create these images matures, we should anticipate that this type of deepfake will accelerate and become more sophisticated.
Finally, video-based deepfakes are the most widely discussed category, generally used to discredit or implicate a target in things they had nothing to do with. While the technology is still quite immature, it is evolving fast, alongside the operational procedures behind its use. Large, long fake videos are still a little out of reach but using the technology to make subtle changes to a video works well at present. With the help of AI, these videos can be extremely convincing.
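For image and video deepfakes, the learning mechanics behind the classic face-swap approach are worth spelling out: one shared encoder is trained with a separate decoder per identity, each decoder learns to reconstruct its own person's face, and swapping decoders at inference transfers one identity onto another. The sketch below is purely conceptual, with toy layer sizes, untrained weights and no training loop; it is not a working deepfake pipeline.

```python
# Illustrative only: shared encoder, one decoder per identity (toy sizes).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # assumes 64x64 input faces
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# In training, each decoder learns to reconstruct its own person's faces
# through the shared encoder; at 'swap' time, a face of person A is encoded
# and decoded with decoder_b, producing the fake.
swapped = decoder_b(encoder(torch.rand(1, 3, 64, 64)))
```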
Are there any examples where deepfakes have been used to carry out financial fraud?
A notable case in 2019 saw attackers use biometric-based deepfake technology to imitate the voice of a chief executive in order to carry out financial fraud. A UK CEO believed he was speaking on the phone with his boss, recognising his accent and the melody of his voice. This is an example of sophisticated deepfake fraud that will continue as attackers eye the financial incentives. In this case, the criminals managed to con the business out of £200,000.
Presumably, deepfakes also represent a threat to governments and public administrations?
Deepfakes present considerable risk for governments. Recent history has shown a proliferation of attacks intended both to manipulate democratic elections and to destabilise entire regions. The implication is that a deepfake appearing to come from a trusted authority could artificially enhance or destroy public confidence in a candidate or leader, or shift perception of a public issue, such as Brexit, global warming or COVID-19 vaccinations, and so influence an outcome beneficial to a malicious state or actor.
This is where an evolution of deepfake technology could become extremely dangerous as misinformation can destroy public confidence. It is certainly possible that deepfakes could become the basis of future disinformation campaigns.
Could you explain how deepfakes are being used to spread disinformation, and the dangers this could represent?
To explain this, it’s important to identify the difference between misinformation and disinformation. Misinformation is spreading false information, something humans naturally do all the time, whether they intend to or not. It’s easy to see something on social media that may not be true and tell a friend or family member, subsequently spreading misinformation.
Disinformation, on the other hand, is knowingly spreading information that is biased, misleading or false. Thinking about this from a government perspective, this could easily be used for political propaganda. What’s better for creating a scandal against your opponent than a video showing them doing exactly what your allegations call out?
In many cases, disinformation campaigns do not need to use sophisticated fake images or videos. Their target audience is almost always biased towards the narrative being pushed and they want to believe the material is real. Additionally, it only takes a video to circulate very briefly for immense harm to be done.
We have already seen the emergence of ‘disinformation as a service’, where cybercriminals sell disinformation services and craft advanced campaigns to help push the client’s particular agenda. As these campaigns increase and tools become more accessible, we should anticipate that deepfake technology could be used to enhance their objectives.
If you’re a CSO/CISO, what should you be doing to best guard against the threat of deepfakes?
Security teams will clearly struggle to identify deepfakes technically. As techniques improve, an arms race will ensue as malicious actors innovate to stay a step ahead, exploiting the imbalance whereby cyber teams must defend against every scenario while an attacker needs to find only a single weak point. Likewise, as security teams adopt new technology to identify deepfakes, techniques to circumvent that identification will proliferate, and will unfortunately serve to make deepfake creation more realistic and harder to detect over time.
For CSOs and CISOs, a strong security and compliance culture, backed up by well understood processes, should be implemented in order to combat deepfakes effectively in business. This can be helped by adopting the zero trust principle of ‘never trust, always verify’. Simply following a dual authorisation process to transfer money, or verifying instructions received with a well-known truth, such as calling someone’s direct line, can expose deepfake fraud. These processes are not new, and many organisations will already have them in place, with regulators already demanding them.
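As an illustration of that ‘never trust, always verify’ process, the decision to release a payment can be reduced to two checks: sign-off from two distinct, pre-authorised approvers, and an out-of-band confirmation against a well-known truth such as a direct line. The sketch below is hypothetical; the approver registry and the callback flag are assumptions for illustration, not a real payments workflow.

```python
# A minimal sketch of dual authorisation plus out-of-band verification.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    requested_by: str
    approvals: list          # approvers who have signed off
    callback_verified: bool  # confirmed via a known direct line, not the inbound call

def may_release(request: PaymentRequest, authorised_approvers: set) -> bool:
    # Two distinct, pre-authorised approvers, neither of them the requester...
    valid = {a for a in request.approvals
             if a in authorised_approvers and a != request.requested_by}
    # ...and a verification step outside the channel the instruction arrived on.
    return len(valid) >= 2 and request.callback_verified

approvers = {"alice.finance", "bob.treasury", "carol.cfo"}
request = PaymentRequest(200_000.0, "Example Supplier Ltd", "dave.ap",
                         approvals=["alice.finance", "bob.treasury"],
                         callback_verified=True)
print(may_release(request, approvers))  # True only when both checks pass
```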
"We have seen how quickly misinformation can spread. The best defence right now is to teach members of the public to be wary of material without a trusted pedigree and to challenge things that make wild claims"
Another solution could be to use biometrics as proof of possession of a device and combine this with additional factors. These could be attributes like known behaviour, contextual information and things that the authorised user alone would know, as well as a PIN code or multi-factor authentication (MFA) on a phone. By creating additional layers of security, simply faking one aspect of an identity will not be enough.
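A hypothetical sketch of that layering is below: several independent factors have to agree before an identity claim is accepted, so a single deepfaked biometric is not enough on its own. The factor names and the threshold are illustrative assumptions.

```python
# Illustrative layered verification: no single factor is sufficient.
def identity_verified(factors: dict) -> bool:
    checks = [
        factors.get("biometric_match", 0.0) >= 0.9,  # voice or face match score
        factors.get("device_possession", False),     # registered phone present
        factors.get("pin_or_mfa_ok", False),         # knowledge factor or MFA prompt
        factors.get("behaviour_consistent", False),  # location, habits, time of day
    ]
    # Require at least three independent factors to pass.
    return sum(checks) >= 3

print(identity_verified({"biometric_match": 0.97, "device_possession": True,
                         "pin_or_mfa_ok": True}))    # True
print(identity_verified({"biometric_match": 0.99}))  # False: a faked voice alone fails
```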
Is this also an issue that the wider public needs to be better educated about and, if so, how can that be organised?
We have seen how quickly misinformation can spread. The best defence right now is to teach members of the public to be wary of material without a trusted pedigree and to challenge things that make wild claims. If we are all a little bit more sceptical, then the attackers looking to fool us will have a much harder job. Many organisations have a role to play here, from businesses and governments to schools and hospitals, and I believe the challenge will be creating a consistent and simple message for the public.
Over the longer term, I expect this sort of technology to become well understood, if not commonplace, within society. If everyday social apps included this sort of functionality, the public would naturally be much more wary of what they see online or in the media. Trust between parties would then have to be verified by a second factor; much as in business, the public might use MFA or biometric input via mobile devices as a simple and scalable approach to authentication. This shift to a public zero trust mentality, not trusting video or voice media alone, would reflect and follow the business frameworks in place now.
As well as education, what role does regulation have to play?
There are currently many existing and proposed pieces of legislation that can be applied to control the use of this technology, or are specifically designed to do so. Copyright, fraud and privacy laws will apply in certain instances when breached, but do not combat the wider problems posed by deepfakes.
Specific legislation seeks to assert a measure of control either by proving the authenticity of media, with digital watermarking or fingerprinting of data asserted as ‘true’, or by requiring digitally adjusted or synthesised media to clearly label itself as such, known as digital accountability. This is the next logical step following on from social media platforms such as Twitter flagging untrusted, synthetic or manipulated media in posts. Potentially though, flagging trusted media is a more difficult burden than flagging untrusted media. While this may work for ‘official’ media, it would be challenged by the vast volume of publicly generated content every day. In addition, such a formal assertion of digital truth would have to be enforced strictly by governments, or run the risk of being forged at scale.
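To illustrate the ‘assert authenticity’ route, a publisher can fingerprint a piece of media by hashing it and signing that hash, so that any later alteration invalidates the signature. The sketch below uses an Ed25519 key pair from the Python cryptography library; the media bytes and key handling are placeholder assumptions, and real provenance schemes such as watermarking or content credentials are considerably more involved.

```python
# Illustrative media fingerprinting and signing; placeholder content only.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def fingerprint(media: bytes) -> bytes:
    """SHA-256 fingerprint of the raw media bytes."""
    return hashlib.sha256(media).digest()

# Publisher side: sign the fingerprint of the original footage.
original = b"...raw bytes of the original footage..."
publisher_key = Ed25519PrivateKey.generate()
signature = publisher_key.sign(fingerprint(original))

# Verifier side: any change to the media invalidates the signature.
public_key = publisher_key.public_key()
for candidate in (original, b"...subtly altered, deepfaked bytes..."):
    try:
        public_key.verify(signature, fingerprint(candidate))
        print("matches the publisher's signed original")
    except InvalidSignature:
        print("altered, or not the signed original")
```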
Regarding existing controls, one of the challenges we face is that most of this legislation was not developed with deepfakes in mind. We will need to review what is in scope for legislation such as GDPR to make sure the right categories of personal data are covered. For example, while GDPR does cover aspects like biometric data, it is more focussed on storage than on use. Likewise, copyright legislation may cover things like video, but for an entirely different purpose, and when the purpose differs it can be hard to prosecute under it.
Determining and policing the true from the fake raises a host of new problems in itself. Who decides what is true? And how do we ensure the availability and integrity of the truth? As with so much regulation, variances in approach between countries can lead to confusion and difficulty in enforcement on a global scale, so working towards ‘global norms’ is incredibly important here, as it is with cybersecurity more generally. Policing social media is also a challenge for similar reasons. History has shown that different platforms will respond in different ways and at different speeds, even under common regulation, while misinformation may propagate much more rapidly.
We know that cyber criminals are constantly working to modify their threats and attacks. How do you expect deepfake methods to evolve?
There is a feedback loop with all emerging technologies like these: the more they generate success, the more that success is fed back into the technology, rapidly improving it and increasing its availability.
Much of the defence against deepfakes is going to come from research being actively driven by universities, security companies and think tanks. There is a significant amount of work taking place to expose deepfake videos, most of it aiming to release tools that can detect fake videos.
The best deepfake detector to emerge from the recent Facebook-led competition caught only about two-thirds of the 100,000 sample videos it was tested against. Even using AI and ML to develop deepfake detection immediately runs up against the same technology being used to develop deepfake algorithms that avoid detection. It’s a battle of AI versus AI.
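For a rough sense of how ML-based detection works, a detector typically scores sampled frames of a video and aggregates the result. The sketch below is a toy, untrained classifier with illustrative layer sizes and threshold; production detectors are far larger and, as those competition results show, still miss a significant share of fakes.

```python
# Illustrative frame-level deepfake scoring; untrained toy model.
import torch
import torch.nn as nn

frame_classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),
)

def video_looks_fake(frames: torch.Tensor, threshold: float = 0.5) -> bool:
    """frames: (num_frames, 3, H, W) tensor of frames sampled from the video."""
    with torch.no_grad():
        scores = torch.sigmoid(frame_classifier(frames)).squeeze(1)
    return scores.mean().item() > threshold

# In practice the classifier is trained on large labelled corpora of real and
# synthesised faces, while attackers train generators to evade such detectors.
sampled_frames = torch.rand(8, 3, 224, 224)
print(video_looks_fake(sampled_frames))
```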