Why Runtime Security Is the New Perimeter

We sat down with Kathy Del Gesso, a CISO who's been in security leadership for years, to talk about what's actually changing right now. The conversation went broader than we expected. CISOs aren't just securing systems anymore. They're trying to figure out how to govern AI agents that act autonomously, manage thousands of machine identities, and make decisions that happen faster than humans can track.

Three Shifts That Matter

The security landscape is changing along three major axes.

Risk is concentrating in cloud runtime. This is where code executes, data moves, and identities and AI systems actually operate. Static controls can't keep up anymore. The old model of securing perimeters and data at rest doesn't work when everything is dynamic and distributed.

Identity has become the new weak spot. We're not just talking about user accounts anymore. Organizations are dealing with thousands of machine identities, tokens, and AI agents making decisions automatically. The attack surface has expanded from people to include all these non-human entities operating at machine speed.

CISOs are being asked to shift from blocker to growth enabler. Executives want security teams to be part of building customer trust and winning deals, not just the department that says no. This is a fundamental reframing of the security function's relationship to the business.

The Runtime Security Problem

When AI agents start taking actions in your environment, you're dealing with a new category of risk: unintended behavior at machine speed.

These aren't classic malicious attacks. They're accidental misuse happening very quickly. Agents chain APIs in unexpected ways. They escalate privileges through automation. They spin up workloads faster than you can monitor them.
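One common mitigation for this class of risk is a pre-execution guardrail: every action an agent proposes is checked against an explicit policy before it runs. Here's a minimal sketch in Python; the `AgentAction` type, the allowlist contents, and the API call names are all illustrative assumptions, not a real framework.

```python
# Sketch of a pre-execution guardrail for agent actions.
# All names (AgentAction, ALLOWED, DENY_ALWAYS) are illustrative.
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent_id: str
    api_call: str   # e.g. "iam.attach_role_policy"
    target: str

# Actions each agent is explicitly approved to take.
ALLOWED = {
    "deploy-bot": {"ecs.update_service", "s3.get_object"},
}

# Calls that should never be automated, regardless of agent.
DENY_ALWAYS = {"iam.attach_role_policy", "iam.create_access_key"}

def evaluate(action: AgentAction) -> str:
    if action.api_call in DENY_ALWAYS:
        return "block"     # likely privilege-escalation path: stop and alert
    if action.api_call in ALLOWED.get(action.agent_id, set()):
        return "allow"
    return "review"        # unexpected API chain: hold for a human

print(evaluate(AgentAction("deploy-bot", "iam.attach_role_policy", "role/admin")))  # block
```

The key design choice is the default: anything outside the allowlist pauses for review rather than executing, which is what keeps "unintended behavior at machine speed" from compounding.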

The challenge isn't securing a network perimeter anymore. You're not thinking about controlled entry points or static data storage. There are multiple ways into cloud runtime, and traditional security models weren't built for this level of dynamism.

Kathy pointed out that agents will take actions we didn't explicitly approve. That's both the promise and the problem. The promise is automation and efficiency. The problem is losing visibility into what's actually happening and why.

Threat Patterns Most Teams Aren't Accounting For

The obvious threats are getting attention: API keys, service accounts, and non-human identities are on most security teams' radar now.

What's less obvious is how unprepared SOCs are for this shift. Most teams haven't adapted their detection logic to identify these new patterns. They're still looking for traditional indicators of compromise, not unintended agent behavior or automated privilege escalation.

SOCs need new signals, better context, and updated playbooks. They're drowning in data from various tools, alerts, and logs. Correlating that context was already hard. Add in the non-human component, and it becomes exponentially harder.
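What a "new signal" can look like in practice: instead of matching known indicators of compromise, baseline what each non-human identity normally does and flag first-time deviations. This is a deliberately tiny sketch; the event fields and identity names are assumptions, and a production version would need time windows, decay, and alert suppression.

```python
# Illustrative behavioral baseline for non-human identities:
# flag the first time an identity calls an API outside its observed set.
from collections import defaultdict

baseline = defaultdict(set)   # identity -> set of APIs seen during learning

def learn(events):
    for e in events:
        baseline[e["identity"]].add(e["api"])

def detect(event):
    # True means anomalous first use of this API by this identity.
    return event["api"] not in baseline[event["identity"]]

learn([
    {"identity": "svc-billing", "api": "s3.get_object"},
    {"identity": "svc-billing", "api": "dynamodb.query"},
])
print(detect({"identity": "svc-billing", "api": "iam.pass_role"}))  # True
print(detect({"identity": "svc-billing", "api": "s3.get_object"}))  # False
```

The point isn't this exact rule; it's that detection logic for machine identities keys off behavioral drift rather than signatures.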

The defender versus attacker dynamic is evolving into what Kathy called "Spy vs. Spy," the old cartoon where each side tries to outwit the other. Attackers are adapting fast. They're using AI for better phishing, faster reconnaissance, more personalized social engineering that's harder to spot.

But the real shift is toward the cloud control plane. API keys, service accounts, model endpoints. These are becoming the focus because traditional entry points are getting harder as organizations deploy AI-powered defense tools. So attackers are going after places where AI behaves unpredictably: model manipulation, prompt injection, influencing automated decision making.

What AI Actually Fixes in the SOC

AI is already helping with high-volume, repetitive work: alert triage, noise reduction, log correlation, incident summaries, and that first-draft investigation report that sometimes doesn't get written as quickly as you'd like.

AI connects signals faster than humans can. It makes Tier 1 work more efficient. Some companies have fully automated Tier 1 with AI. A few advanced organizations have even automated Tier 2.

But humans are still needed for judgment calls. Understanding intent and business context. Making decisions about whether something is a real issue or not. Aggregating risk to determine if it needs to be addressed.

The conversation we had with Anton last week touched on the same point: alerts are a byproduct of detection rules. Unless you tie AI back to detection engineering, you're just processing garbage faster. The real opportunity is in adaptive defense. Proactive threat hunting in your environment. Writing adaptive detection rules based on what you're seeing. Building and testing those rules in simulated environments before deploying them.
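The "build and test rules in simulated environments" idea can be sketched as a small detection-as-code loop: a rule is a predicate over an event window, and you replay labeled synthetic windows to measure precision and recall before promoting it. Everything here is invented for illustration, including the rule itself and its threshold.

```python
# Sketch of testing a detection rule against simulated, labeled event
# windows before deployment. Rule, threshold, and event shape are assumed.

def rule_rapid_workload_spawn(window):
    # Fires if a single identity creates more than 5 workloads in the window.
    creators = [e["identity"] for e in window if e["api"] == "run_instances"]
    return any(creators.count(i) > 5 for i in set(creators))

def evaluate_rule(rule, labeled_windows):
    tp = fp = fn = 0
    for window, is_attack in labeled_windows:
        fired = rule(window)
        if fired and is_attack:
            tp += 1
        elif fired:
            fp += 1
        elif is_attack:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

attack = [{"identity": "svc-x", "api": "run_instances"}] * 6
benign = [{"identity": "svc-y", "api": "run_instances"}] * 2
precision, recall = evaluate_rule(rule_rapid_workload_spawn,
                                  [(attack, True), (benign, False)])
print(precision, recall)  # 1.0 1.0
```

Tying AI into this loop (generating candidate rules, then gating them on simulated precision/recall) is what separates adaptive detection engineering from just processing the same alerts faster.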

There will always be a line between what AI can decide and what requires human judgment. High-stakes decisions stay with humans. Low-stakes decisions can potentially be automated. The nuance is in determining what counts as high-stakes versus low-stakes, and that's something you figure out working with the teams where this gets deployed.
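One way teams operationalize that line is a stakes-based router: each action type gets a stakes score agreed on with the owning team, and only actions at or below a threshold auto-resolve. The scores, threshold, and action names below are purely illustrative assumptions.

```python
# Hedged sketch of routing decisions by stakes. Unknown action types
# default to high stakes, so automation fails safe.
STAKES = {
    "close_duplicate_alert": 1,
    "quarantine_test_host": 3,
    "revoke_prod_credentials": 9,
}
AUTO_THRESHOLD = 3   # at or below: AI may act; above: human in the loop

def route(action_type):
    score = STAKES.get(action_type, 10)   # unknown -> treat as high stakes
    return "automate" if score <= AUTO_THRESHOLD else "human_review"

print(route("close_duplicate_alert"))    # automate
print(route("revoke_prod_credentials"))  # human_review
print(route("delete_backup"))            # human_review (unknown action)
```

The scores themselves are the negotiation: they're set with the teams where this gets deployed, which is exactly the "figuring out high-stakes versus low-stakes" work described above.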

The CISO Role Is Expanding

AI isn't just everywhere in marketing speak. It's actually reshaping what CISOs do day to day.

Kathy mentioned the role is expanding from protecting systems to securing decision making. When AI agents can act across cloud environments, modifying infrastructure, spinning up workloads, and moving data autonomously, you're no longer guarding perimeters. You're governing behavior.

The CISO's responsibility now extends to both human and machine actors. They're being pulled into product strategy, engineering decisions, and leadership conversations. Because AI isn't just another tool. It's part of the operating model.

This creates a hybrid role. You're still a security leader, but you're increasingly focused on risk management and AI governance. And as activity shifts to cloud runtime, new questions emerge about how traditional security roles interact with platform engineering and other functions in this new landscape.

Organizations might need to rearchitect their team structures to follow the technology. That's still being figured out.

The Build Versus Buy Trap

AI coding tools have made it easier to build things in-house. Someone can spin up the equivalent of DocuSign in a weekend now. That changes the build versus buy equation, but not in the way people think.

There are a few factors to consider. First is speed versus depth. Buying gets you capabilities fast, which matters when you needed something yesterday. Building lets you dig deeper and tailor to your specific architecture and risk profile.

Then there's talent. Do you have the in-house expertise to build AI systems? More importantly, do you have the resources to maintain them long-term? Most teams underestimate the upkeep. The person who built it might not be there in a couple years. Tech has high turnover. Priorities change. The product needs to evolve over time to keep meeting your needs.

Organizations often misjudge this decision. They underestimate the long-term cost and complexity of building. They think of it as a one-time feature. But with AI, you need constant tuning, monitoring, retraining, and support. Then there's integration with your tech stack and the ongoing talent requirements.

The result of getting this wrong is projects that take longer, cost more, and deliver less value than buying a mature solution would have.

That said, if you have an expert team in a specific area and feel confident building something, it might make sense. Leave the things you don't have deep expertise in to people who do that work every day.

There's also a spectrum here. Companies like Google and Meta have the engineering power to build most things internally, and often prefer to. But most companies don't have that luxury. It really depends on company culture and risk tolerance.

What's Coming in the Next 3 to 5 Years

The most underestimated shift is the level of autonomy AI systems will have in day-to-day operations.

AI agents won't just assist. They'll make changes to cloud environments, move data, deploy code, and interact across systems automatically. They'll interact with other agents. This shifts the entire risk model from human-driven to machine-driven.

The risk isn't just more attacks. It's also losing visibility into why an AI system acted a particular way. Not having the guardrails to control it.

Teams need to prepare for a new level of autonomy, oversight, and monitoring of AI systems. Some companies are more advanced than others, but as a whole, we're still figuring this out.

The Palo Alto acquisition of Chronosphere seems timely in this context. Observability of everything agents can do and potentially will do becomes critical. It's a missing piece that companies are scrambling to solve, and you're seeing consolidation around it. Larger companies want to stay competitive, and they're acquiring capabilities rather than building them.

Where This Leaves Security Teams

The goal isn't to resist these changes. The goal is to understand your unique environment, find the right tools for visibility, and recognize where you need to grow your security controls.

Nobody has a 100% foolproof answer yet. CISOs are learning from each other in forums and peer discussions. The field is evolving too fast for certainty. But recognizing where you're vulnerable and understanding what's in your control versus what cloud providers manage is a good start.

At Arambh Labs, we built specialized AI agent swarms for exactly this problem. Our platform covers alert triage, threat hunting, and adaptive detection engineering. The goal is simple: give SOC teams visibility and control as their environments fill up with autonomous AI systems. Security analysts shouldn't spend their time on repetitive work. They should focus on the high-stakes decisions that actually need human judgment. That's what we're building for.

Curious how CISOs are navigating AI governance? Watch the full conversation with Kathy on our YouTube channel.

If you're ready to see how specialized AI agents can handle runtime security challenges in your environment, visit our website and schedule a demo with our team.
