3 Security Problems Every Financial Institution Needs to Address in an Agentic AI World
Based on an interview with Sunil Mallik, Head of Cybersecurity Architecture and Engineering at PayPal and former CISO at Discover
We sat down with Sunil to talk about agentic AI in financial services. What we got was a masterclass in how security actually works when you're protecting billions of dollars in transactions across mainframes, cloud infrastructure, and everything in between.
Every financial services company has plans to adopt agentic AI. But the speed varies wildly, and risk appetite is the deciding factor. There are still fundamental challenges without clear answers, and the organizations moving fastest aren't necessarily the ones who'll get it right.
The Day Everything Changed
Before we get to AI, Sunil told us a story that reshaped how he thinks about security. His company decided to test remote work readiness. They sent everyone home on a Thursday for a practice run. "No one came back for a year and a half," he said.
That stretch transformed financial services security. Teams had to enable remote work at scale while securing it, figuring it out in real time. New tools like Miro went from novelty to necessity. The line between internal and external networks blurred so much that Sunil now says, "Everything is external."
Then, just as security teams caught their breath, the AI boom hit.
Three Generations of Tech, One Security Team
Here's what makes financial services security uniquely hard: you're protecting three eras of technology at once.
Some of the largest financial companies still run mainframes. They have on-premises data centers. They have modern cloud infrastructure. And increasingly, they rely on third parties to manage critical pieces of their ecosystem.
The challenge isn't just covering all these environments. It's maintaining consistent security controls across systems that were built decades apart. When a transaction flows from cloud to data center to a third-party processor, how do you maintain context? How do you ensure your controls don't break when you patch something?
And here's the kicker: mapping out attack paths in this kind of environment is brutal. You need to see what an adversary sees from the outside and trace how they could exploit vulnerabilities across multiple ecosystems. That's how you design monitoring and implement controls that actually work.
The Agentic AI Adoption Question
Adoption plans are universal among the financial services companies we talked to. What varies, wildly, is speed, and risk appetite is the deciding factor.
"There are still risks we don't have complete answers on," we heard. Explainability and bias can't be solved with technical controls alone. They live in the messy intersection of technology and business process.
When financial institutions evaluate AI, they're weighing three things: cost, value delivered, and risk. And they have to answer these questions in an environment where regulators, board members, and customers expect serious due diligence.
The Three Hard Problems
The security challenges of agentic AI break down into three areas: identity, context, and action.
Identity gets complicated fast. An agent has its own non-human identity, but it also derives identity and entitlements from the human it's working for. The question is: how do you ensure an agent stays within the boundaries of what that human is allowed to do?
In a multi-agent system where agents hand off work to other agents, this becomes even harder. You need to carry that identity and those constraints through every handoff.
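To make the handoff problem concrete, here's a minimal sketch of one plausible approach, assuming a simple set-based permission model: treat the delegated scope as something that can only shrink. The class and field names are our own illustration, not any specific product's API.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Principal:
        name: str
        entitlements: frozenset

    @dataclass(frozen=True)
    class DelegationToken:
        # Carries the originating human's identity plus a permission
        # scope that can only narrow as work moves between agents.
        on_behalf_of: str
        scope: frozenset

        def hand_off_to(self, agent: Principal) -> "DelegationToken":
            # Effective permissions = intersection of the current scope
            # and the receiving agent's own entitlements. Scope never grows.
            return DelegationToken(self.on_behalf_of, self.scope & agent.entitlements)

    # Usage: the human's boundary survives every handoff.
    alice = Principal("alice", frozenset({"read_account", "initiate_payment"}))
    agent = Principal("payments-agent", frozenset({"initiate_payment", "close_account"}))
    token = DelegationToken(alice.name, alice.entitlements).hand_off_to(agent)
    assert token.scope == frozenset({"initiate_payment"})  # close_account is never granted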
Context is about tying agent actions back to human intent. What is the agent doing? Why? Who authorized it? As context changes, you need continuous validation. This is where zero trust principles become critical, not as a tool but as a strategy.
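What could continuous validation look like in code? The sketch below re-checks identity, authorization, and intent freshness before every single action instead of once per session. The field names and the five-minute window are illustrative assumptions, not a prescribed policy.

    import time

    MAX_INTENT_AGE_SECONDS = 300  # assumed policy: re-confirm human intent every 5 minutes

    def validate_action(action: dict, context: dict) -> bool:
        # Zero trust in miniature: nothing carries over from a prior check.
        return all([
            action["on_behalf_of"] == context["human_id"],    # identity still matches
            action["type"] in context["authorized_actions"],  # explicitly authorized
            time.time() - context["intent_confirmed_at"] < MAX_INTENT_AGE_SECONDS,  # intent is fresh
        ])

    # Usage
    ctx = {"human_id": "alice", "authorized_actions": {"refund"}, "intent_confirmed_at": time.time()}
    print(validate_action({"on_behalf_of": "alice", "type": "refund"}, ctx))  # True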
Action is the biggest challenge of the three, and it gets philosophical: Is the agent acting in the best interest of the human it represents?
"You have to prove that. That's always been expected from technology we've deployed."
This means full auditability. Every action an agent takes needs to be logged so you can spot deviations from expected behavior.
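One way to make that log trustworthy, sketched below under a schema we've assumed for illustration, is a hash chain: each entry commits to the one before it, so any after-the-fact edit breaks the chain and surfaces as tampering.

    import hashlib
    import json
    import time

    class AuditLog:
        def __init__(self):
            self.entries = []
            self._prev_hash = "0" * 64  # genesis value

        def record(self, agent_id: str, action: str, on_behalf_of: str) -> dict:
            # Each entry embeds the previous entry's hash, so altering any
            # historical record invalidates everything recorded after it.
            entry = {
                "ts": time.time(),
                "agent_id": agent_id,
                "action": action,
                "on_behalf_of": on_behalf_of,
                "prev_hash": self._prev_hash,
            }
            entry["hash"] = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            self._prev_hash = entry["hash"]
            self.entries.append(entry)
            return entry

        def verify(self) -> bool:
            # Recompute the chain; any deviation means the log was altered.
            prev = "0" * 64
            for e in self.entries:
                body = {k: v for k, v in e.items() if k != "hash"}
                expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
                if e["prev_hash"] != prev or e["hash"] != expected:
                    return False
                prev = e["hash"]
            return True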
What the SOC of the Future Actually Looks Like
Right now, SOC analysts spend most of their time in what gets called "noise versus news." They're drowning in alerts, triaging false positives, dealing with the same types of incidents over and over. It's Groundhog Day.
Agentic AI changes that equation. Analysts will spend less time on repetitive work and more time on things that matter: improving controls, writing new detection logic, finding blind spots in coverage.
The quality of SOC outputs will improve. You'll get better fidelity in analyst actions and better feedback loops to the rest of your security organization. Every incident becomes an opportunity to improve the controls that should have prevented it.
But humans aren't going anywhere. You'll still need analysts to validate critical actions like isolating users or network segments. The goal isn't full automation. It's freeing up human intelligence for work that actually requires it.
"There will always be a need for the analyst," we heard. The environment keeps changing. New technologies emerge. Business processes create noise. The idea that controls will ever work perfectly is fantasy.
What Vendors Get Wrong
The conversation turned candid when we asked about friction between security vendors and financial institutions.
Vendors often misunderstand how procurement works in large financial companies. A senior leader might love your product, but that doesn't mean you'll close the deal quickly. You still have to go through architecture review, third-party risk management, security oversight. The bigger the organization, the longer it takes.
These processes exist for a reason. They manage risk. They satisfy regulators and auditors. They might frustrate everyone involved, but they're not going away.
The other issue is communication. Vendors are rightfully proud of what they've built. But a vendor's outside-in view of a customer's organization doesn't always translate clearly to the team being pitched. Organizations differ in structure, priorities, and what they consider urgent.
In cybersecurity, there's no absolute assurance, only reasonable assurance. Every team has a backlog. They're working on the highest impact items. Your product might solve a real problem, but if it's not their top priority, it's not their top priority.
The Fundamentals Still Hold
When we asked what advice security leaders should follow when integrating AI, the answer went back to basics.
Confidentiality, integrity, and availability. The CIA triad has been the foundation of cybersecurity for decades, and it still applies to AI. Most of the risks you've dealt with in other technologies apply here too.
But AI introduces unique risks that blur the line between technology and business. You need a different kind of shared responsibility model between technical teams and business owners.
Here's the non-negotiable part: when you integrate an agent into a customer-facing product, you're still responsible for ensuring that agent acts in the customer's best interest. The agent is working on their behalf. Their data needs to be protected. The integrity of their transactions needs to be maintained.
That's the foundation of customer trust, and you can't compromise on it.
There's also an amplification risk. If one human identity is compromised, and agents are acting on behalf of that identity, the problem propagates fast. It's not just one compromised account anymore. It's that account plus all the agent actions tied to it.
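A minimal sketch of containment, assuming every agent session registers the human identity it derives from (the registry design is our own illustration): flag one identity and you can enumerate and tear down everything downstream of it.

    from collections import defaultdict

    class DelegationRegistry:
        def __init__(self):
            # human identity -> agent sessions acting on its behalf
            self._sessions = defaultdict(set)

        def register(self, human_id: str, agent_session_id: str) -> None:
            self._sessions[human_id].add(agent_session_id)

        def revoke_identity(self, human_id: str) -> set:
            # On compromise, one call surfaces every derived agent session;
            # a real system would also invalidate tokens and halt in-flight work.
            return self._sessions.pop(human_id, set())

    # Usage: one compromised login, three agent sessions to kill.
    registry = DelegationRegistry()
    for session in ("payments-agent-1", "support-agent-7", "reporting-agent-2"):
        registry.register("alice", session)
    print(registry.revoke_identity("alice"))  # all three sessions returned for teardown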
Why This Matters for SOC Teams
At Arambh Labs, we're building agentic AI specifically for security operations. This conversation reinforced something we already believed: the technology has to respect the fundamentals while solving the actual problems analysts face every day.
SOC teams don't need more alerts. They need to shift from noise to news. They need agents that understand identity, maintain context, and take actions that can be audited and validated. They need systems that amplify human intelligence instead of replacing it.
The future we discussed isn't about removing humans from the loop. It's about giving them back their time so they can do the work that actually requires human judgment.
Financial services will get there at different speeds based on their risk appetite and priorities. But the direction is clear. And the organizations that figure out how to balance innovation with customer trust will set the standard for everyone else.
Want to hear the full conversation? Stream the complete episode on our YouTube channel.
Ready to see how agentic AI can transform your SOC? Visit our website and book a demo.