Predictions: The Cybersecurity Challenges of AI
Our recently released predictions report includes a number of important considerations about the likely trajectory of cybercrime in the coming years, and the strategies and tactics that will evolve in response. Every year, the story is “Attackers are getting more sophisticated, and defenders have to keep up.” As we enter a new era of advanced AI technology, we identify some surprising wrinkles to that perennial trend.
Our Data + AI Predictions 2024 report focuses heavily on generative AI, because after a year of excitement, these technologies are only now starting to roll out in the enterprise in a meaningful way. The security implications of generative AI and large language models (LLMs) are considerable, and our report explores them in detail. Three predictions in particular stand out in my mind.
1. AI will be a huge boon to cybercriminals before it becomes a help to security teams.
My most immediate concern is the effect that generative AI will have on the competitive playing field. Not among competing businesses within a given industry, but between security teams and the countless bad actors trying to penetrate our defenses. In the eternal game of one-upmanship, every time one side innovates, the other side adjusts. We develop defenses against their new attack vectors. They find ways around our new security measures. It’s a constant cycle that generative AI is going to significantly disrupt.
Legitimate businesses are constrained by regulatory compliance, customer relationships, and other standard business considerations. The bad guys don’t worry about any of that, so there’s far less friction when they adopt new technologies. Generative AI will eventually be a great boon to security teams, which are perpetually understaffed and overburdened, but it will take time to develop the right tools and get them into our hands. Meanwhile, the bad guys will be experimenting freely with AI-driven attack techniques. So we’ll likely see the downside of AI for some time before we can enjoy the upside.
2. Cyberattackers will continue to shift left.
Another security implication of more and better AI is that it changes attackers’ priorities. Attackers are moving left, toward the beginning of the standard software development lifecycle. In other words, they’re targeting developers.
As machine learning and automation have spread through production environments as part of the DevOps and DevSecOps movements, there’s less human error there for criminals to capitalize on. So attackers are now looking for ways in through developer environments, where human mistakes can still be discovered and exploited. This shift will only escalate in the years ahead.

Such attacks are harder for security teams to defend against, because establishing a baseline for acceptable development activity is far more challenging than doing so for an automated, well-managed production environment. Development is by nature chaotic and experimental, so understanding what’s normal and abnormal in a development environment is very difficult. It’s imperative that CISOs and security teams figure it out anyway. This is where you throw everything you have, human expertise, machine learning, and AI, at learning what suspicious behavior looks like so you can mitigate the risk.
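To make that concrete, here is a minimal sketch of one way a team might start: fitting an unsupervised anomaly detector to historical developer activity and flagging sessions that fall outside the learned baseline. The features, values, and thresholds below are illustrative assumptions, not recommendations.

```python
# A minimal sketch: flagging unusual developer activity with an
# unsupervised anomaly detector. Feature names and values are
# illustrative assumptions, not a production baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: commits pushed, repos touched,
# secrets-scanner hits, and hour of day. Developer activity is noisy,
# so any real baseline needs far more history and generous tolerance.
historical_sessions = np.array([
    [12, 2, 0, 10],
    [8, 1, 0, 14],
    [20, 3, 1, 16],
    [5, 1, 0, 9],
    # ... many more observed sessions
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(historical_sessions)

# Score a new session; -1 means "anomalous relative to the baseline."
new_session = np.array([[90, 25, 6, 3]])  # heavy activity at 3 a.m.
if model.predict(new_session)[0] == -1:
    print("Flag for analyst review:", new_session)
```

An anomaly flag like this is a starting point for human review, not a verdict; the whole premise of the prediction is that development environments resist simple baselines.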
3. As smart as AI is, keep your eye on the dumb attacks.
One of the most effective forms of cyberattack is the phishing email. Generally, these are not very sophisticated attacks, and the truth is they don’t have to be: send a strange email with misspellings and a bad return address to a few thousand busy employees, and someone is bound to fall for it.
These attacks will never go away. But they might get smarter and more effective once generative AI tools start writing them, drawing on knowledge of your company’s business and perhaps samples of your executives’ writing or public appearances. And given the well-established, clandestine industry around cybercrime tools, it’s not hard to imagine someone maintaining an LLM-driven tool for exactly such attacks, a kind of CrimeGPT.
Building a secure future
We will see many responses to the security implications of LLMs. We anticipate regulation, though regulatory bodies generally move far more slowly than cybercriminals. Enterprise security teams will take their own steps, from maintaining LLMs within the security perimeter and limiting employee use of external AI tools to increasing training and vigilance around phishing and deepfakes. And as security teams incorporate new AI tools, they’ll be able to accomplish more and react faster, despite the persistent shortage of human talent. AI is going to make each of our security analysts far more effective.
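As one illustration of what "LLMs within the security perimeter" might look like in practice, here is a hedged sketch of an internally hosted model drafting a first-pass triage summary for an alert. The endpoint, token, and model name are hypothetical; the point is that alert data never leaves your own network.

```python
# A minimal sketch: an internally hosted, OpenAI-compatible LLM drafts
# a first-pass triage note for a security alert. The base_url, api_key,
# and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # assumed internal endpoint
    api_key="internal-token",
)

alert = {
    "rule": "impossible_travel",
    "user": "j.doe",
    "details": "Logins from two countries 40 minutes apart",
}

response = client.chat.completions.create(
    model="internal-security-llm",  # hypothetical model name
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Summarize the alert and "
                    "suggest next investigative steps."},
        {"role": "user", "content": str(alert)},
    ],
)
print(response.choices[0].message.content)
```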
Security concerns will also further motivate companies to adopt a unified data platform and bring their applications into that environment. Bringing the work to the data keeps the data more secure, because it never has to be copied and moved to an external application.
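A brief sketch of that pattern, using Snowpark as one example: the aggregation runs inside the platform, and only a small result leaves it. Connection parameters and table names here are placeholders.

```python
# A minimal sketch of "bringing the work to the data": the query runs
# inside the data platform, so raw log rows are never exported to an
# external tool. Connection parameters and table names are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, count

connection_parameters = {
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<warehouse>", "database": "SECURITY", "schema": "LOGS",
}
session = Session.builder.configs(connection_parameters).create()

# Aggregate failed logins per user in place; only the aggregate
# result comes back to the client.
failed_by_user = (
    session.table("LOGIN_EVENTS")
    .filter(col("STATUS") == "FAILED")
    .group_by(col("USER_NAME"))
    .agg(count(col("EVENT_ID")).alias("FAILED_LOGINS"))
)
failed_by_user.show()
```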
For security professionals, it’s always an exciting time. Technology doesn’t sit still, and there’s a certain thrill to recognizing new threats and devising improved defenses to counter them. If there’s one prediction that can’t miss for 2024, it’s that it’ll be an interesting year.
For more on cybersecurity, generative AI and more, read Snowflake’s Data + AI Predictions 2024 report.