The threat landscape has unequivocally evolved with the advent of AI-based cyberattacks. With bad actors using generative AI (GenAI) to create more personalized, bespoke attacks at greater speed than ever before, cyber risk management must itself evolve, using AI to turn the tide.
The evolution of AI-powered attacks is something many organizations have already experienced in the form of deepfakes. Just last year, threat actors leveraged AI to deepfake the voice of Zscaler CEO Jay Chaudhry in an attempt to trick employees into acting on their instructions. The threat actors called an employee with a recorded message explaining that they, as Jay, had poor network quality and asking to communicate over text. This was followed by an urgent request for help moving money to an overseas bank. While the employee noticed the phishing attempt and reported it immediately, it is easy to understand why individuals fall into these traps when the credibility of a CEO is leveraged over voice and text message.
But how are threat actors wielding AI against organizations? And, more importantly, how can IT teams fight back against this new evolution of digital warfare?
AI attacks just got personal
A single prompt from a threat actor can direct a rogue GenAI model to scan the web for a particular company's publicly exposed, vulnerable devices and steal their credentials, or to scrape social networks to identify an employee persona to compromise, with the final goal of reaching AI/ML development and production environments to steal sensitive data.
Alternatively, the rogue GPT could be instructed to search for employees on LinkedIn and then use a relevant persona (e.g. a finance team member who won't themselves have access to the AI/ML development and production environment) to craft a spear phishing email targeting AI/ML employees, with the intent of tricking them into downloading a malicious document onto their machines. If successful, the GPT then installs malware on the machine and, through privilege escalation and lateral propagation, infects the entire development and ML environment.
Where once these types of threat operations, whether carried out through reconnaissance or a carefully plotted phishing attempt, would have been a huge manual lift, with GenAI a couple of well-worded sentences is all it takes for a threat actor to be ready to attack.
Stopping AI in its tracks
To improve their defenses against these new, sophisticated GenAI attacks, organizations must take steps to minimize their external attack surface. Eliminating inbound VPNs for remote access is an easy first step, but it should be part of a complete security architecture overhaul: implementing a zero trust approach that effectively hides the attack surface itself.
The main danger of an AI-based attack is how quickly it can move through an organization's systems to steal data or corrupt different parts of the business. Lateral movement like this is the difference between a point incident and an organization-wide escalation. However, if an organization builds its defenses with zero trust in mind, based on the principle of least-privileged access, it gains granular access control over individual applications. This microsegmentation significantly decreases the likelihood of lateral movement through the IT environment.
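To make the idea concrete, here is a minimal sketch of the kind of default-deny, per-application access check that microsegmentation enforces. The policy table, group names, and is_allowed() function are illustrative assumptions, not any particular platform's API:

```python
# Minimal sketch of per-application, least-privileged access checks.
# All names (POLICIES, AccessRequest, is_allowed) are hypothetical.

from dataclasses import dataclass

# Policy maps each user group to the only applications it may reach.
# Anything not explicitly listed is denied by default (zero trust).
POLICIES = {
    "finance": {"erp", "expense-portal"},
    "ml-engineering": {"feature-store", "model-registry"},
}

@dataclass
class AccessRequest:
    user: str
    group: str
    application: str

def is_allowed(request: AccessRequest) -> bool:
    """Default-deny: grant access only if the user's group is
    explicitly entitled to the requested application."""
    allowed_apps = POLICIES.get(request.group, set())
    return request.application in allowed_apps

# A finance persona probing the ML environment is denied outright,
# cutting off the lateral-movement path described above.
print(is_allowed(AccessRequest("alice", "finance", "model-registry")))        # False
print(is_allowed(AccessRequest("bob", "ml-engineering", "model-registry")))   # True
```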
Through user-to-app segmentation, critical apps belonging to high-risk user groups are separated from general network access to avoid widespread infrastructure contamination via risky parties, whether remote or in the office. Then, as a final layer of defense against lateral movement, IT teams should position honeypots in the environment to attract and trap AI intruders, enabling their detection.
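As a simple illustration of the honeypot idea, the sketch below registers decoy applications that no legitimate workflow references, so any access to one is a high-fidelity signal of an intruder. The decoy names and response are assumptions for illustration:

```python
# Minimal honeypot sketch: decoy applications that no legitimate
# workflow references, so any touch is a high-fidelity alert.
# The decoy names and alerting path here are illustrative.

import logging

logging.basicConfig(level=logging.WARNING)

DECOY_APPS = {"backup-vault", "legacy-admin-console"}  # never advertised to users

def inspect_access(user: str, application: str) -> None:
    """Flag any access to a decoy app; an automated intruder sweeping
    the environment trips these before reaching real assets."""
    if application in DECOY_APPS:
        logging.warning("Honeypot hit: %s touched decoy %s", user, application)
        # A real deployment would quarantine the session or device here.

inspect_access("mallory", "backup-vault")  # emits a warning
```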
For many organizations, AI is also seen as a potential business benefit, and various divisions may be attempting to build their own applications. However, these private AI applications are a goldmine for bad actors because of the large amount of sensitive information they often contain, making data loss prevention a core concern. Securing such applications, and ensuring proprietary data is only used in a controlled environment, is therefore an equally critical consideration as organizations plan to use them for business advantage.
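One common control here, sketched below under the assumption of a simple pattern-based filter, is scanning text bound for a private LLM and redacting sensitive values before they leave the controlled environment. The patterns and redact() helper are illustrative, not a complete DLP engine:

```python
# Illustrative data loss prevention sketch: scrub text destined for a
# private LLM before it is forwarded. The patterns below are examples,
# not an exhaustive rule set.

import re

SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern before the
    prompt reaches the model."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Summarize: customer SSN 123-45-6789, token sk-abcdef1234567890XYZ"))
# Summarize: customer SSN [REDACTED ssn], token [REDACTED api_key]
```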
Harnessing AI for protection
Attempting to ignore GenAI, whether as a cyberthreat or as a digital transformation accelerator, will only cause a business greater harm in the long term. Instead of blocking AI applications altogether, organizations need to evolve their security procedures so that employees can reap the benefits of LLMs while remaining confident that their IP and data are protected. And this is where AI can work as a double agent in the cyber fight.
Put another way, the impact of generative AI on zero trust is actually bidirectional. AI can support IT teams in monitoring environment activity in far greater detail than is currently possible. Based on what it sees, AI can then recommend that IT or security admins implement specific policies restricting user access to certain applications, informed by the applications that an individual, or others in the same group, typically uses. AI could also predict the business impact of enabling a security policy, determining, for example, whether the policy would actively damage staff productivity. All of this could be handled by a single AI module, empowering IT teams to implement policies as efficiently and securely as possible without major implications for the wider business.
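As a minimal sketch of that monitoring-and-recommendation loop, the snippet below builds a per-group baseline of application usage from access logs and surfaces apps outside the baseline as candidates for tighter policy. The log format, group names, and recommend() helper are hypothetical, not a description of any shipping product:

```python
# Sketch: derive per-group app-usage baselines from access logs and
# recommend restricting apps that few group members actually use.
# Log events and thresholds below are illustrative assumptions.

from collections import defaultdict

access_log = [  # (user, group, application) events, e.g. from a proxy
    ("alice", "finance", "erp"),
    ("alice", "finance", "expense-portal"),
    ("bob", "finance", "erp"),
    ("bob", "finance", "model-registry"),  # unusual for finance
    ("carol", "finance", "erp"),
    ("carol", "finance", "expense-portal"),
]

usage = defaultdict(lambda: defaultdict(set))  # group -> app -> users
members = defaultdict(set)                     # group -> users
for user, group, app in access_log:
    usage[group][app].add(user)
    members[group].add(user)

def recommend(group: str, threshold: float = 0.5) -> list[str]:
    """Suggest restricting apps used by fewer than `threshold` of the
    group's members: likely candidates for tighter policy."""
    total = len(members[group])
    return [
        f"Restrict '{app}' for group '{group}' (used by {len(users)}/{total})"
        for app, users in usage[group].items()
        if len(users) / total < threshold
    ]

for suggestion in recommend("finance"):
    print(suggestion)  # Restrict 'model-registry' for group 'finance' (used by 1/3)
```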
In the future, we’ll see increased use of generative AI to help organizations through their zero trust transformation, and, subsequently, zero trust platforms will help with secure enterprise AI adoption.
Leveling the playing field
The world is going through an AI revolution in which data is gold, and organizations should be laser-focused on limiting the visibility of their data to LLMs. As adversaries use AI to launch sophisticated attacks, IT teams must respond with AI-powered zero trust segmentation: using AI to fight AI, and leveling the playing field with attackers.
Claudionor Coelho
Claudionor Coelho is Chief AI Officer at Zscaler.