Predictions 2025: AI as Cybersecurity Tool and Target
Though AI is (still) the hottest technology topic, it’s not the overriding issue for enterprise security in 2025. Advanced AI will open up new attack vectors and also deliver new tools for protecting an organization’s data. But the underlying challenge is the sheer quantity of data that overworked cybersecurity teams face as they try to answer basic questions such as, “Are we under attack?”
In “Snowflake AI + Data Predictions 2025,” I join a dozen experts and leaders to discuss the changes AI in particular will drive in the next few years, and from a security perspective, there’s good news and bad. AI is both a contributor to the problem — more data to secure, more attack surface — and a potential boon, providing tools to manage amounts of data that humans can’t grasp on their own. Among other things, our report calls out four imperatives for cybersecurity as this AI era advances.
Responding to data overload with a security data lake
Security professionals have to continually up their game to make sure that, from all the data at their disposal, they’re using the correct inputs to identify vulnerabilities and incidents. The security data lake will continue to win favor as a cost-effective way to pool large quantities of data from diverse sources. Within the security data lake, teams can bring machine learning and advanced analytics to bear. And because it becomes more affordable to retain more data for longer, teams can run better forensics. Compared to traditional security information and event management (SIEM) tools, security data lakes are generally more flexible, scalable and cost-effective. Security data lakes are also better suited to AI solutions, and for all those reasons we expect them ultimately to replace the SIEM.
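The core idea of pooling diverse sources can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's implementation: two log sources with different schemas (the field names and sample records are invented for the example) are normalized into one shared shape, so a single query can reconstruct a cross-source timeline.

```python
from datetime import datetime, timezone

# Hypothetical raw events from two sources, each with its own schema.
vpn_events = [
    {"ts": "2025-01-10T02:14:00Z", "user": "alice", "src_ip": "203.0.113.7"},
]
email_events = [
    {"timestamp": 1736476500, "sender": "alice@example.com", "action": "forward"},
]

def normalize_vpn(e):
    # Map the VPN log's fields onto a shared schema.
    return {
        "time": datetime.fromisoformat(e["ts"].replace("Z", "+00:00")),
        "actor": e["user"],
        "source": "vpn",
        "detail": e["src_ip"],
    }

def normalize_email(e):
    # Map the email log's fields onto the same shared schema.
    return {
        "time": datetime.fromtimestamp(e["timestamp"], tz=timezone.utc),
        "actor": e["sender"].split("@")[0],
        "source": "email",
        "detail": e["action"],
    }

# The "lake": one pooled, uniformly shaped event stream.
lake = [normalize_vpn(e) for e in vpn_events] + [normalize_email(e) for e in email_events]

# A cross-source question that siloed tools can't answer in one pass:
# everything actor "alice" did, in time order, regardless of origin.
timeline = sorted((e for e in lake if e["actor"] == "alice"), key=lambda e: e["time"])
for e in timeline:
    print(e["time"].isoformat(), e["source"], e["detail"])
```

In a real deployment the normalization would be done at ingest and the query would run over years of retained data; the pattern, though, is the same: one schema, one place to ask questions.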
Understanding AI as an attack vector
Last year, we published an AI security framework that identifies 20 attack vectors against large language models and generative AI systems. In it, we discuss three layers of AI that can become an attack surface. We covered the first, the core platform, in last year’s Predictions report: as automation prevents misconfigurations at the production level, developer environments become a comparatively softer target. At this point, though, that infrastructure is also firming up quite well.
In the coming year, we expect to see the next layer, model operation, become a more common target. Security professionals will have to consider how the model is initially trained and how it incorporates new data in production. We’ll have to look at the lifecycle of the entire model, as well as the lifecycle of the data it’s fed. Security teams will have to standardize their approaches to new AI technologies to make sure they’re as secure as their general enterprise infrastructure.
A third layer of attack, which we expect to increase further down the line, is directly interacting with the AI to trick it into disclosing sensitive data that perhaps should not have been incorporated into the model. That’s why we’re seeing the emergence of data security posture management (DSPM), a practice that seeks to provide better visibility into the location, uses and security of data throughout an enterprise.
Understanding AI as a security enabler
Artificial intelligence will also provide new tools for protecting the enterprise, and security teams are already experimenting with early possibilities. An LLM behind a conversational interface makes it possible to ask questions in natural, human language about overall security posture or specific alerts and patterns. This security copilot experience will mature and become a more effective assistant to perpetually understaffed security teams. In particular, AI-powered tools will help more junior security professionals quickly translate ideas into queries and analysis. This will reduce the time it takes to learn complex query logic — and to get answers to immediate security concerns.
In particular, the ability of AI systems to summarize security incidents is going to be a great advance. Imagine the AI telling you, “I saw a strange pattern in the data movement — this much data of this type normally doesn't get transferred at this time of the day, from this location.” That high-level description is much more helpful than a notification that says, effectively, “Go check out the VPN logs, the storage logs and your email logs — then connect the dots yourself.” Which is pretty much where we are today.
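Before an AI can summarize an anomaly like the one above, something has to detect it. One simple approach (a sketch under assumed data, not a description of any product) is to compare each transfer against a historical baseline for that hour of day and flag values many standard deviations out; the baseline figures below are invented for illustration.

```python
import statistics

# Hypothetical history of outbound transfer sizes (MB), keyed by hour of day.
# In practice this baseline would be computed from weeks of logs in a data lake,
# and likely keyed by location and data type as well.
baseline = {
    2: [5, 7, 6, 4, 5, 6],     # 2 a.m. is normally quiet
    14: [120, 110, 135, 128],  # mid-afternoon is busy
}

def is_anomalous(hour, size_mb, threshold=3.0):
    """Flag a transfer whose size is far outside the norm for that hour."""
    history = baseline.get(hour)
    if not history or len(history) < 2:
        return False  # not enough data to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return size_mb != mean
    # z-score test: how many standard deviations from the hourly mean?
    return abs(size_mb - mean) / stdev > threshold

# A 900 MB transfer at 2 a.m. stands out sharply against a ~5 MB norm;
# a 130 MB transfer at 2 p.m. is within the usual range.
print(is_anomalous(2, 900))   # True
print(is_anomalous(14, 130))  # False
```

The AI layer's contribution is on top of detection like this: turning the flagged statistic into the plain-language explanation, and correlating it with the VPN, storage and email logs so the analyst doesn't have to connect the dots manually.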
Advanced AI will help security teams both to understand anomalies as they are detected and to do forensic analysis after an event to fully understand what happened and how to prevent similar events. Ultimately, it will be used not just for incident analysis but for overall data security posture management, because an AI can analyze much more complex systems than existing tools or human operators can.
Keeping humans in the loop
It’s already common for security tools to automate responses to certain incidents, shutting down an attack faster than a human could act. In theory, gen AI could make more complex decisions and take bigger, more comprehensive actions. But that won’t happen for quite some time. Gen AI cybersecurity tools will require human judgment for final decisions, especially where ethical issues and more complex risk factors must be considered.
In total, I’d say the future for cybersecurity teams looks bright. There’s more work to do, and more data to protect, but that’s always true. As standards and new approaches are developed to better secure the AI-driven enterprise, it’s the new tools, the new ways to get our arms around our data and our security posture, that are the most exciting.
Read “Snowflake AI + Data Predictions 2025” for more on cybersecurity, software development and data infrastructure in the age of AI.