NEW YORK -- As an early adopter of AI, Kevin Hearn, senior vice president and head of consumer bank development at Axos Bank, made one mistake: giving hundreds of people on his team access to the technology without a specific goal.
During a fireside chat at the AI Agent Conference, Hearn said he gave 300 employees access to an AI agent, which yielded 300 different results.
Some used the agent to write code, some used it to fix code, and others struggled to prompt the agent effectively, leading to inconsistent code quality.
So Hearn had to reevaluate how to proceed and ultimately decided to shrink his team of AI testers from 300 to about five to seven people focused on experimenting, testing and refining the AI agent.
“As people come to me with ideas, I may give them the autonomy to go chase it, or I’ll have that team specifically focus on it,” Hearn said in an interview. “The power of that team is that once they’ve solidified an agent in a particular area, meaning they’ve worked with all the consumers of that agent to put a corporate effect on it, we’re now able to perpetuate that consistently.”
Hearn’s strategy is an example of how enterprises are trying to capture the power of AI agents without missing out on the technology, while also mitigating risks and keeping agents within a contained environment so their use does not backfire and cause business mistakes. It also illustrates the balancing act enterprises must perform when approaching the new technology.
“Agents aren’t traditional software,” said Matt DeBergalis, CEO and co-founder of Apollo GraphQL, in an interview at the conference. “On the one hand, everybody is banging on the table saying, ‘Go fast, go far, act like a startup.’ But on the other hand, this is the biggest data exfiltration threat to every enterprise.”
He said that while enterprises need room to experiment, they also need strong foundations in place so that experimentation happens in a measured way.
Internal Use Cases
For Axos, the opportunity AI promised was too great to pass up, so the company mitigated risk by focusing on internal uses of AI and AI agents first. The banking institution uses OutSystems Agent Workbench to create, deploy and manage its AI agents, and has used the technology to build internal business analyst agents, Scrum Master agents and engineering agents.
Hearn said that having a small, focused team working on experimenting with AI is key.
“It’s all coming through that kind of centralized team that ensures the governance is there,” he said. “Governance being that we are using it appropriately. We are not feeding information we should not be. It does not have access to the outside world.”

Like Axos Bank, the fintech company Netevia uses AI, including agentic AI, for internal processes such as customer service. However, it mitigates risk by not integrating the technology into customer-facing applications.
“Part of the journey is to be able to understand how you [tread] slowly,” said Vlad Sadovskiy, CEO of Netevia, in an interview at the conference. “You cannot [mess] with people's money even though the technology is already available to others doing agentic payments, AI-to-AI payments. We are still about a year away from the actual people thinking of adoption.”
T-Mobile and Upwork
While some enterprises are more focused on internal use cases, others are building externally facing agents for consumers. At T-Mobile, AI helps solve customer service issues.
T-Mobile customers use the company’s AI-powered app, T-Life. The telecommunications company also places a heavy focus on managing potential risks, said Julianne Roberson, director of AI engineering at T-Mobile.
“We have observability on everything, so if something goes wrong, we see it,” Roberson said in an interview at the conference. “We try not to put things out if we don’t know if they’re going to work.”
Similarly, Upwork prioritizes risk mitigation by giving agents a contained environment in which to run.
“We built a lot of internal tech that provides the safety harness for all of this,” said Andrew Rabinovich, CTO and head of AI at Upwork, in an interview. “Every language model that’s run internally -- and they’re all custom-built -- they’re all passed through this trust system to avoid hallucination and prevent getting off the rails.”
He added that Upwork spent time demystifying AI agents for employees so that they understood how they worked.
“We spent a lot of time teaching and presenting to the whole company all the components of the technology so people get a better sense of it, what to do with it, and then people have an opportunity to interact with it and try to include it on their own as well,” Rabinovich said.
This containment strategy, in which enterprises ensure the right governance and tools are in place before releasing agents more broadly, is critical to mitigating the risks that come with AI and agentic AI tools.
“People see performance, mistake it for confidence, then they get FOMO and it is a mess. As soon as you get into FOMO mode, it is a big mess,” said Robert Blumofe, executive vice president and chief technology officer at Akamai, a cloud computing and security company. He said organizations should reach for AI only when nothing else works.
“Use AI for what AI is awesome at and not try to force it into everything,” he said.