
Capital One deprecated an AI tool it once championed. Its DevEx chief says that’s the point.

With constant view of a new “destination state,” Capital One’s SVP of developer experience talks about how to be cloud-first, tech-first, and developer-first.
Mar 18th, 2026 8:02am

How could you possibly ensure you are providing the right tools and processes to 14,000 engineers? How do you even know if your engineers can recognize what being set up for success looks like?

For Catherine McGarvey, senior vice president of developer experience (DevEx) at Capital One, it’s about focusing on enablement of those 14,000 technologists, the majority of whom are software engineers, as well as platform engineers, site reliability engineers, and data scientists.

“I used to just say ‘productivity’ because productivity is the throughput ability to move it forward. Enablement means that they have the knowledge as well, in addition to being productive,” she tells The New Stack. “We care a lot about: ‘Are they given the tools they need to be successful? Do they know if they are being successful? And, how do we increase their enablement?’”

Like other “accidental tech companies,” Capital One combines rigorous, cutting-edge yet highly regulated technology with job stability and a builder culture. Here’s what McGarvey tells The New Stack about how AI is affecting that enablement.

Platform engineering at Capital One

Among other things, the Capital One developer experience team acts as the steward of the internal developer platform.

A la Team Topologies, the team is focused on where in the tech stack to build this scaffolding, or, as McGarvey puts it: How much should an engineer have to know about the internals of their cloud environment to be successful?

“One of the things we’ve been thinking about is exploring what you have to define in your deployment and what can be defined for you,” she says. “The more you centralize that, the more that becomes a central [solution] with platform engineering, rather than each team customizing their deployment.”

As the platform team looks to standardize an approach around best practices, she says the next question becomes: Does everyone need to do that best practice, or can you do it for them?
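That "do it for them" idea can be sketched as a platform layering a team's minimal deployment spec over centrally managed defaults. The keys and values below are purely illustrative and do not reflect Capital One's actual configuration:

```python
# Hypothetical sketch: the platform supplies opinionated deployment
# defaults, and each team declares only what is unique to its service.
PLATFORM_DEFAULTS = {
    "replicas": 3,
    "healthcheck": {"path": "/health", "interval_s": 30},
    "logging": {"format": "json", "level": "info"},
}

def render_deployment(team_spec: dict) -> dict:
    """Layer a team's minimal spec over centrally managed defaults."""
    def merge(base: dict, override: dict) -> dict:
        out = dict(base)
        for key, value in override.items():
            # Recurse into nested sections so a team can override one
            # field without redefining the whole block.
            if isinstance(value, dict) and isinstance(out.get(key), dict):
                out[key] = merge(out[key], value)
            else:
                out[key] = value
        return out
    return merge(PLATFORM_DEFAULTS, team_spec)

# A team only states what is specific to its service; everything else
# is defined for it by the platform.
deployment = render_deployment({"image": "payments-api:1.4.2", "replicas": 5})
```

The point of the sketch is the division of labor: best practices live in one central place, and changing a default (say, the healthcheck interval) updates every team's deployment without each team touching its own spec.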

“If I were starting today, would I still build it this way?” McGarvey says, of the organization’s focus on “destination state,” instead of “I’ve invested in this so long, and therefore I should continue.”

Catherine McGarvey, senior vice president of developer experience (DevEx) at Capital One.

With this growth mindset, the company has already invested heavily in standardization across cloud-first tech stacks, in consistent deployment processes, and in centralized services and automation.

By the time McGarvey joined the major financial services corporation about 18 months ago, it was already standardized around a CI/CD platform and in the cloud. Her remit became answering:

  • How do we create more leverage for developers?
  • Is there more we can automate for developers?
  • How do we increase the overall continuous deployment of teams?

Unsurprisingly, AI tooling became a big part of these answers.

Early AI adoption, clear criteria

Capital One DevEx aims to make it easy for developers to use the latest AI tooling as early as possible. Not from day one — it is a well-regulated financial institution, after all — but her team gets to work on what she calls “a fairly rigorous but very efficient process” as soon as interesting tools are released.

This included an early push for coding assistance and now for agentic tooling, for which they’ve already run several proofs of concept grounded in success criteria and analyses around:

  • What is the expected behavior versus the reality?
  • What does the AI tool provide?
  • What is it missing?
  • Can it handle the Capital One style of doing things?

These proofs of concept are usually run by AI engineers tasked with testing new AI tooling and building on or adapting it internally. To avoid a return to tool sprawl, these early adopters are also tasked with determining how most engineers at Capital One would benefit from, or be hindered by, AI adoption. Once a new AI tool is selected, anyone at the distinguished engineer level or above is invited to training.

AI engineers are also expected to regularly update the tooling, especially as new AI and agentic tools emerge.

“We had one tool we’d rolled out that we thought was great that, a year later, we ended up decommissioning,” McGarvey says. “We have this mindset of, it doesn’t matter how far we’ve come, if it’s the wrong call or we learn something new that changes our behavior, we can [change].”

Capital One runs monthly developer surveys and is heavily instrumented, performing weekly reviews of tooling usage. As a result, the DevEx team discovered an increase in pull requests that had been sitting in backlogs for abnormally long periods.
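A minimal version of the kind of check that could surface that signal might flag open pull requests with no activity past a threshold. The field names, threshold, and data here are hypothetical, not Capital One's actual tooling:

```python
from datetime import datetime, timedelta, timezone

# Illustrative staleness threshold; a real review would tune this
# per repo or per team.
STALE_AFTER = timedelta(days=14)

def stale_pull_requests(pull_requests: list[dict], now: datetime) -> list[dict]:
    """Return open PRs whose last activity is older than the threshold."""
    return [
        pr for pr in pull_requests
        if pr["state"] == "open" and now - pr["last_activity"] > STALE_AFTER
    ]

now = datetime(2026, 3, 18, tzinfo=timezone.utc)
prs = [
    {"id": 101, "state": "open", "last_activity": now - timedelta(days=30)},
    {"id": 102, "state": "open", "last_activity": now - timedelta(days=2)},
    {"id": 103, "state": "merged", "last_activity": now - timedelta(days=60)},
]
flagged = stale_pull_requests(prs, now)  # only PR 101 is flagged
```

Run weekly, a report like this turns a vague sense that "reviews feel slow" into a concrete trend that can prompt a targeted survey, as it did here.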

This prompted a specific user survey. Engineers reported that they liked the auto-assigned tickets much less than the ones they created or assigned themselves, which led to the deprecation of that AI tool.

AI rollout: weeks, not months

As observed in The New Stack ebook, AI for the Enterprise: The playbook for developing and scaling your AI strategy, it is essential that AI buy-in be well-communicated as a top-down strategy — never a mandate — as AI success hinges on guardrails and cross-organizational integration.

“We’re going to teach you, we’re going to enable you,” McGarvey says. “Now, if you resist it, that’s going to show up in some way. At some point — this is less about the tool and more about your throughput or whatever you’re trying to achieve — your results show up.”

Instead of mandating tool usage, McGarvey says, “We’re focused on how do we lean forward with the group and then try to catch the gaps. That’s creating a bit more of a safety mindset with teams where they’re going to play around,” and creating space for teams to leverage AI tools and rethink how they work.

Following this path, she estimates that it would take a couple of weeks to roll out an AI tool across Capital One’s 14,000 tech workers.

This isn’t just AI-generated code either. AI is heavily involved in documentation writing, but McGarvey emphasizes, “You get a lot of lift there, but you still need [human-in-the-loop] judgment, you still need an understanding of the points you were trying to make.”

The destination state for docs is still being worked out, including whether documents are for humans or meant to be queried as part of a chat overlay.

Since none of the DevEx team’s work affects external customers, developer tooling is also the perfect place to start when adopting agentic AI. Capital One is exploring agents for test writing, with an opportunity to become a more test-driven company. It’s also exploring how to apply agents for upgrades and bug fixes.

While still committed to keeping humans involved in pull requests, McGarvey says they are looking at AI tools to reduce the size of PRs and improve the quality of design. AI is also well suited to serve as a first-round code reviewer.

“Code reviews is a really interesting one to standardize on and make sure you’re getting some automated feedback before it’s going to a person for that review,” she explains.

As for a fleet of agents working in concert, Capital One remains in exploration mode because, currently, the quality, reliability, and security are not in place to let agents act on their own. Before even considering this at scale, McGarvey says they will ensure all safety gates and guardrails are centralized and in place. The organization is also very restrictive about what developers — whether human or agentic — can do locally on machines.

Success metrics for engineering teams

Like most tech organizations, Capital One has adopted objectives and key results, or OKRs.

“Because my focus is on dev experience, not shipping and delivering a product to a customer, mine are business goals, but they’re business goals for enabling devs to be more productive,” McGarvey explains.

“Like anything, it’s a tool. It all depends on how you use it and how clearly you bring people along with the purpose behind it. And I believe objectives and key results really help you learn. So the value isn’t the metric, it’s the learning that the metric helped you discover.”

One key result the developer experience team focuses on right now is the number of vulnerabilities per container running in production. The team has written service-level agreements for the time to resolve these vulnerabilities, based on severity. There’s another key result: Fewer teams have to patch and address known vulnerabilities.

“It results in fewer fire alarms that can happen with a vulnerability that’s discovered later,” she says, with a big push to also decrease overall developer time spent on vulnerabilities.

With these key results in mind, the DevEx team and the application teams it serves have considerable autonomy in how they address vulnerabilities. This could mean standardizing on languages and frameworks, or choosing which third-party libraries to use or not. It could also be centralizing who does the build and deployment.

“It really is the business value of decreasing risk and increasing security, and keeping developers focused on more interesting things,” McGarvey says of the objective these KRs are tied to. “Of course, if you measure the thing, and you focus on it, and you talk about your learnings, you tend to improve it.”

Again, this is where AI comes in as part of the solution, she continues, “in utilizing some deterministic systems and creating lift and running automation of it.”

This OKR is also interesting, she says, because as the DevEx team adopts new scanning tools, there’s new activity-based tracking available, which leads to new key results, such as switching to a leaner base image for all containers.

Plus, adoption metrics are a consistent goal of platform engineering and other developer experience teams.

“Adoption is a wonderful interim measure to get you to that outcome-based measure,” McGarvey says.

For the last few years, Capital One has also conducted annual Voice of the Engineer (VOTE) developer surveys, consisting of 30 questions to benchmark the developer experience. They include questions like:

  • How do you rate X in the software delivery lifecycle?
  • How do you feel about your tooling?
  • How often are you talking about test flakiness as a team?
  • How often does your team talk about velocity?
  • How much time do you spend coding?
  • How much time are you spending writing tests after the fact?

It’s a practice that other enterprises with mature DevEx programs, such as Netflix, also maintain.

Any developer survey is only as good as the number of devs who respond to it — and then how a DevEx team acts on those responses.

McGarvey’s team both incentivizes participation up front with prizes and follows up with a rigorous strategy rollout when the CEO shares survey data. Twice a month, her team also releases comms about what they’ve improved.

She says, “The actual impact is that the thing they told us that was frustrating gets better.”
