Why Your SOC Still Looks Like It's 2002 (And What AI Should Actually Fix)

We sat down with Anton Chuvakin, one of the most outspoken voices in security operations, to cut through the AI SOC hype. What we got was a reality check on why alert fatigue has plagued SOCs for 20 years, why the traditional Level 1/2/3 analyst model is broken, and what it actually takes to build security operations that work in 2025.

Anton doesn't mince words. He's watched companies build 2002-style SOCs in 2022, using blueprints from old whitepapers as if they were timeless fundamentals. And now he's watching vendors promise that AI will replace entire security teams. Spoiler: it won't.

The SOC Maturity Problem

The state of SOCs today is all over the map. On one end, you have modern detection and response operations built on engineering-first principles. These teams refuse to even call themselves SOCs because they don't think of themselves as operators; they think of themselves as engineers. Think SRE or DevOps, but for security.

On the other end, you have organizations still running late-90s SOC models. Big monitors. Rigid shifts. Analysts sitting in chairs triaging alerts the same way they did two decades ago. They've got slightly better tools now: EDR instead of nothing, modern network detection instead of whatever they had back then. But the fundamental structure? Unchanged.

Some companies literally picked up a white paper from 2002 and built everything according to that blueprint. Nobody told them to check the date. They thought they were following fundamentals.

The problem is that when 30 AI SOC vendors show up promising to fix everything, most of them are trying to improve a 1970s NOC with 2025 technology. You can add AI to a broken model, but you're still working with a broken model.

Alert Fatigue Isn't the Problem

Alert fatigue has been the top complaint in security operations since 2005. If you took a time machine back then and asked a SOC analyst what their biggest problem was, they'd say alert fatigue. Fast forward to 2025, and we're saying the exact same thing.

That staying power is what makes it worth exploring. Because if we've been complaining about the same thing for 20 years, maybe we're looking at a symptom instead of the root cause.

Anton pointed out the real problem: we suck at detection.

That's why watching vendors rush to build AI agents for alert triage feels like missing the point. Faster triage is handy, but if your underlying detection is broken, doing triage faster just means you're processing garbage more efficiently.

Alerts don't just happen like weather. They come from rules someone wrote. And in many SOCs, there's zero relationship between the people who write those rules and the analysts dealing with the alerts. When analysts have never met the content engineers building detection logic, when they never talk about what makes alerts fire, there's almost no hope of fixing the problem.

That bridge between detection engineering and SOC operations is where the real work happens. Whether you call it content authoring or detection engineering, it's the same idea. You need that connection.
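
To make that concrete, here is a minimal, hypothetical sketch (in Python, with made-up field names and a made-up rule) of what "a rule someone wrote" looks like when it carries an explicit owner, so the analyst staring at the alert knows exactly which detection engineer to talk to:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class DetectionRule:
    """A detection rule with an explicit owner, so every alert traces back to a person."""
    rule_id: str
    description: str
    owner: str                      # the detection engineer who wrote the logic
    logic: Callable[[dict], bool]   # returns True when an event should raise an alert


# Hypothetical rule: flag interactive logins by service accounts.
svc_interactive_login = DetectionRule(
    rule_id="AUTH-0042",
    description="Service account used for an interactive login",
    owner="detection-eng@example.com",
    logic=lambda event: (
        event.get("account_type") == "service"
        and event.get("logon_type") == "interactive"
    ),
)


def triage(event: dict, rule: DetectionRule) -> None:
    """The analyst side of the loop: noisy alerts go back to the rule's owner, by name."""
    if rule.logic(event):
        print(f"[ALERT] {rule.rule_id}: {rule.description}")
        print(f"        Noisy or unclear? Talk to {rule.owner}")


# Example event that would fire the rule.
triage(
    {"account_type": "service", "logon_type": "interactive", "user": "svc_backup"},
    svc_interactive_login,
)
```

The code itself isn't the point. The point is that every alert has an author, and the triage workflow should make that author reachable.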

AI in SOC vs AI SOC

Here's where things get contentious. The phrase "AI SOC" is misleading marketing.

AI in a SOC? Yes. AI SOC? No.

There are real, functioning use cases for AI and agents in security operations. The problem is the expectation gap: when vendors sell these tools, decision-makers sometimes interpret "AI SOC" to mean they can replace their entire human security team with automation.

That's the disconnect. It's where the technology promise gets misunderstood.

The issue isn't always vendor marketing. Sometimes it's organizational pressure. Security operations are expensive. Leaders looking at budgets want to know if technology can reduce headcount. It's a reasonable business question, but it's the wrong frame for this technology.

When someone hears "AI SOC," the risk is they think it means eliminating the security team rather than amplifying what that team can do. That's not how this technology works, and it's not what it should be sold as.

The value isn't replacement. It's transformation of how the work gets done.

The Level 1/2/3 Problem

Traditional SOCs organize analysts into levels. Level 1 does basic triage. Level 2 handles escalations. Level 3 takes the hardest cases. It's a model borrowed from NOCs in the 1990s.

The question isn't whether AI can automate Level 1 work. The question is whether that's even the right frame anymore.

Netflix wrote about SOCless detection back in 2018. Other companies followed. The whole idea is to dissolve those rigid levels and focus on skills instead of tiers. Use engineering-first design. Build systems where humans focus on what actually requires human judgment.

Adding AI to a Level 1/2/3 structure just creates weird mental gymnastics. If AI replaces Level 1, what do Level 2 analysts do? Get alerts from the AI? So now the bottom layer is machines feeding humans, but if the machines generate bad alerts, who do you complain to?

It's easier to add AI to a more modern detection and response model than to patch it onto a structure that was outdated 15 years ago. You're not adding AI to a 1990s SOC. You're adding AI to a 1970s NOC. If that doesn't scare you into rethinking your approach, nothing will.

Building a SOC from Scratch in 2025

If you had to build security operations from the ground up today, what would you do?

Anton mentioned he's working on a paper with Deloitte that describes exactly this. Start with engineering-first principles. Base it on what's being called Autonomic Security Operations or SOCless detection. Not because you're trying to be trendy, but because it's designed for how threats and technology actually work now.

Some organizations have done this by tearing down their existing SOC and rebuilding. Not gradual improvements. Full teardown and rebuild. It takes 2-3 years of work, but it works.

From day one, you'd build something AI-ready. That means data quality suitable for AI. Modern pipelines. Process maturity that accounts for the fact that AI makes mistakes. You'd use AI from the start, but you wouldn't call it "AI native," because that term has been ruined by marketing.

The goal isn't to ask which chairs Level 2 analysts should use or how handoffs should work. It's to build something designed for the future.

What Humans Will Actually Do

In a properly built detection and response operation, humans focus on a few specific things:

Decide what to build. This is fundamentally human. Look at available resources, assess risks, decide what's worth detecting. AI can offer advice, but humans make the call. Architectural analysis and design stay human-led.

Oversee how machines function. Validate that pipelines work. Figure out why log volume is dropping. Catch when the business does something that makes detection less effective. Ask why detection isn't performing as well as it should. Humans do this with AI help, but it's still human-led. (A rough sketch of one such check follows this list.)

Top-shelf threat hunting. Hunting is, by definition, a hypothesis-led process. Gen AI helps generate hypotheses based on threat intel. But certain types of hunting for sophisticated attackers will remain humans working with tools, not a machine handling it end to end.

Judgment calls when machines stumble. And they will stumble, because attackers want them to. The adversary gets a vote. When machines are uncertain, do the wrong thing, or get stuck, humans step in. Nobody wants an agent that was supposed to finish in 2 hours still deliberating with itself the next morning.

Risk acceptance decisions. These are going to stay human for a very long time. It's one of those decisions where delegating to a machine creates massive grief.
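
As promised above, here's a rough sketch of the pipeline-oversight piece, with invented log sources and an assumed threshold: compare today's volume per source against a trailing baseline and flag anything that has gone quiet, so a human can ask why.

```python
from statistics import mean

# Hypothetical daily event counts per log source over the past week (most recent last).
volume_history = {
    "edr":      [512_000, 498_000, 530_000, 505_000, 521_000, 515_000, 509_000],
    "firewall": [890_000, 902_000, 875_000, 910_000, 899_000, 905_000, 893_000],
    "cloud":    [140_000, 138_000, 142_000, 145_000, 139_000, 141_000,  61_000],
}

DROP_THRESHOLD = 0.5  # flag if today's volume is under 50% of the trailing average


def check_pipeline_health(history: dict[str, list[int]]) -> list[str]:
    """Return the log sources whose latest volume dropped sharply versus their baseline."""
    flagged = []
    for source, counts in history.items():
        baseline = mean(counts[:-1])   # trailing average, excluding today
        today = counts[-1]
        if baseline > 0 and today / baseline < DROP_THRESHOLD:
            flagged.append(
                f"{source}: {today:,} events today vs ~{baseline:,.0f} baseline"
            )
    return flagged


for finding in check_pipeline_health(volume_history):
    print(f"[PIPELINE] Volume drop needs a human look: {finding}")
```

A machine can flag the drop. Deciding whether it's a broken forwarder, a decommissioned system, or an attacker quietly disabling logging is the human-led part.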

The Threat Hunting Reality

A lot of vendors claim they've built threat hunting agents. What they've actually built is something that generates hypotheses or runs detections.

Today, hunting teams face long lists of things to try. You might allocate a day to persistence mechanisms, another day to exfiltration methods. Run through everything you can think of. But these lists have gotten huge, and environments are complex.

The collaboration model makes sense: machines run the hypotheses, bring in data, do preliminary analysis. They tell you that out of 30 persistence checks, 27 produced nothing. Two look a little suspicious. One looks probably bad.

Then humans take over. The "probably bad" one turns out to be your development team using a slightly non-standard approach to inject stuff in memory. It matches threat actor behavior, but it's not a threat actor doing it. The machine was right that it looked suspicious. It just needed a human to provide context.

The two interesting ones need deeper investigation. Run more queries. Check other things. Either you discover an attacker, or you confirm you're clean and move on.

This is collaboration. The human role is very much there.
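
That collaboration model is simple enough to sketch. The outline below is hypothetical: it assumes each hunt hypothesis can be run as a query that comes back with a verdict, and the verdict labels and checks are invented for illustration, not pulled from any real product.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class HuntHypothesis:
    name: str
    run: Callable[[], str]  # returns "clean", "suspicious", or "probably_bad"


def machine_pass(hypotheses: list[HuntHypothesis]) -> dict[str, list[str]]:
    """Run every hypothesis and bucket the results; humans only see the non-clean ones."""
    buckets: dict[str, list[str]] = {"clean": [], "suspicious": [], "probably_bad": []}
    for h in hypotheses:
        buckets[h.run()].append(h.name)
    return buckets


# Hypothetical persistence checks; in practice each would query EDR, logs, and so on.
checks = [
    HuntHypothesis("new scheduled tasks on servers", lambda: "clean"),
    HuntHypothesis("unsigned DLLs in autoruns", lambda: "suspicious"),
    HuntHypothesis("in-memory code injection patterns", lambda: "probably_bad"),
    HuntHypothesis("new local admin accounts", lambda: "clean"),
]

results = machine_pass(checks)
print(f"{len(results['clean'])} checks produced nothing.")
print("Needs deeper human investigation:", results["suspicious"])
print("Looks probably bad, human review first:", results["probably_bad"])
```

The machine clears the long tail. The human supplies the context that turns "probably bad" into either "that's just our dev team" or "that's an incident."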

The Metrics Trap

You pay for AI SOC tools or build AI into your SOC because you expect something to be better. But if you don't measure what "better" means, how do you know?

A lot of people rush straight to speed metrics: mean time to detect (MTTD), mean time to respond (MTTR). They start chanting these abbreviations like mantras. Speed metrics are fine. You almost expect machines to be faster. But if you obsess over speed without looking at quality, effectiveness, or whether you actually achieved the result, you're going to lose.

It's much faster to just click "resolve" on every alert. You could write a dumb script to do it. Your speed metrics would look amazing. But you'd be 10 times worse at actually protecting anything.

If something is two times faster, two times cheaper, and 10 times worse, you're not saving money. Your speed metric looks good. Your cost metric looks good. But your actual security is in hell.

Balance speed metrics with coverage. Detection quality requires detection breadth. Are you detecting everything you need to detect? That means knowing what you need to detect first, then checking if you're actually catching it.

For critical threats, have multiple layers. EDR, logs, NDR, other tools. Then have a machine look at all of them and say, "Hey, I think NDR missed it and EDR missed it, but here in the log it's a little suspicious. Human, take a look."

That's a win for AI. It guided a human to the right spot.
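
To make "balance speed with coverage" concrete, here's a minimal sketch (with invented numbers and threat names) that puts MTTR next to detection coverage against the list of things you've decided you must detect. A dashboard that shows only the first number is exactly the one the dumb auto-resolve script can game.

```python
from statistics import mean

# Hypothetical incident response times in minutes (alert raised -> resolved).
resolution_minutes = [42, 15, 90, 8, 33]

# Hypothetical coverage map: threats you decided you must detect, and whether
# a working, tested detection exists for each.
required_detections = {
    "credential dumping": True,
    "lateral movement via remote services": True,
    "data exfiltration over web services": False,
    "persistence via scheduled tasks": True,
    "cloud console abuse": False,
}

mttr = mean(resolution_minutes)
coverage = sum(required_detections.values()) / len(required_detections)

print(f"MTTR: {mttr:.0f} minutes")
print(f"Detection coverage: {coverage:.0%} of required threats")

# Auto-resolving every alert drives MTTR toward zero while coverage (and real
# protection) stays exactly where it was, which is why speed alone can't be the goal.
```

Extend the coverage map with per-threat detection sources and you get the layered view described above: which threats EDR, NDR, and logs each cover, and where only one of them would catch it.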

What Actually Matters

The goal isn't to avoid AI or refuse to modernize. The goal is to be honest about what AI can and can't do, what humans should and shouldn't do, and what your security operations actually need.

If you have metrics you already track, adopting AI should make something measurably better. That sounds obvious, but it's not always how this plays out.

More importantly, don't let AI vendors sell you on replacing humans entirely. Don't let executives think they can fire the security team and run everything on automation. And definitely don't take a 2002 SOC model, bolt AI onto it, and expect transformation.

We believe the path forward starts from first principles. Build for engineering-led operations. Use AI where it makes sense. Keep humans in the loop for judgment, architecture, and the work that actually requires human intelligence. At Arambh Labs, we're building agentic AI that amplifies what security teams can do, not replaces them. The future of security operations isn't humanless. It's human intelligence freed from repetitive work and focused on what matters.


Want to hear the full conversation? Stream the complete episode on our YouTube channel.

Ready to see how agentic AI can transform your SOC? Visit our website and book a demo.
