Key to DevOps success: avoid lift and shift to cloud

GM Financial’s DevOps VP Matt McComas offers key lessons on taking a systemic approach, breaking process roadblocks, and measuring automation success.

Rebekah Taylor

March 23, 2022

While DevOps has brought speed, agility, and scalability to application and service delivery, it hasn’t always come easily.

It can be difficult to bring large, well-established enterprises around to a new way of thinking about development and IT infrastructure management. There are silos and outdated team structures to contend with, along with processes that have been in place for years. The challenges are even more complex when you throw the cloud into the mix.

Just ask Matt McComas, Vice President of DevOps for GM Financial, who appeared on the Network Disrupted podcast in Episode 4 of Season 3. He shared insights from more than a decade’s worth of IT and DevOps leadership at the financial services arm of automotive behemoth General Motors with host and BlueCat Chief Strategy Officer Andrew Wertkin.

They covered how GM Financial has come to recognize that architecture and infrastructure must support DevOps, and how McComas’s team has overcome hefty process roadblocks. They also touched on how the company has benefited from avoiding a ‘lift and shift’ approach to cloud adoption and how it measures automation success.

Architecture and infrastructure must support DevOps

McComas recounts that GM Financial had a number of legacy platforms that were critical to the business but were also very fragile, manual, and difficult to manage. IT teams in the trenches were struggling to support them.

A lot of the awesomeness that you get out of DevOps and automation really requires the infrastructure and architecture to support it.

His DevOps team was formed to address these problems. Early on, he recalls, it was “excessively challenging” and often felt like swimming upstream. They quickly learned that they needed a systemic approach.

“You have to, kind of, think about the whole system itself, the architecture, the infrastructure,” he says. “Everything needs to be supportive of this DevOps activity that we’re trying to initiate.”

In the last year or two, he says, GM Financial has come around to understand that. As such, a number of platforms and services are undergoing re-architecture work that will change many core business functions.

The primary DevOps roadblock: process

McComas recounted a particular challenge for his team around implementing automation for server builds.

His team’s automation efforts had reduced the actual time for server provisioning from two to three weeks to a mere one to two hours. But end-to-end, server provisioning still took three weeks. The primary impediment: gathering the prerequisites now took longer than the actual work.

If DevOps is about people, process, and technology, McComas says the technology is the easy part.

‘A lot of people are married to the process’

“Process is more difficult because it really requires a lot of humility,” he says. “Because a lot of people are married to the process, especially people that have been around a while. They’ve always done it that way. Many people, kind of, get stuck in this rut, where they think that process can’t be changed. When in reality, there isn’t really a process that can’t be changed.

“A lot of times, if you just step back and look at it and start asking why you’re doing things the way you are, you find, well, it’s because we were trying to compensate for a mistake or some problem that happened years ago that nobody really knows about anymore,” he adds.

Recognizing process as a risk management activity

Wertkin asked McComas how he works through process roadblocks.

He says it begins with recognizing that processes are usually put in place to mitigate past problems and manage risk. In theory, as automation improves, it should also build a safety net. McComas recalls that it was up to his team to win over skeptics on other teams and demonstrate that they could manage risk effectively through automation.

Little by little, with pilot projects, they won over converts. In fact, McComas says, many came around to embrace automation even more because it provides full traceability for changes.

In the manual world, it was easy to fudge the system and hide change failures with reimplementations.

“Now, if you have a true change failure, you can’t really hide it,” McComas says. “It’s better from a change management standpoint. It’s better when you’re tracking things like MTTR [mean time to recovery] and some of the core DevOps metrics because now you have a change metric that’s truly accurate. It’s truly reflective of what happened because the people aren’t involved. Instead, it’s the automation that’s doing the work.”
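To make that concrete, here is a minimal sketch of how a team might compute MTTR and change failure rate from automation-generated change records. It is not from the episode; the record format and values are hypothetical.

```python
from datetime import timedelta

# Hypothetical change records emitted by an automation pipeline.
# Because automation logs every change, failures can't be hidden.
changes = [
    {"id": "CHG-101", "failed": False, "downtime": timedelta(0)},
    {"id": "CHG-102", "failed": True, "downtime": timedelta(minutes=42)},
    {"id": "CHG-103", "failed": False, "downtime": timedelta(0)},
    {"id": "CHG-104", "failed": True, "downtime": timedelta(minutes=18)},
]

failures = [c for c in changes if c["failed"]]

# Change failure rate: failed changes as a share of all changes.
change_failure_rate = len(failures) / len(changes)

# MTTR: mean downtime across failed changes.
mttr = sum((c["downtime"] for c in failures), timedelta()) / len(failures)

print(f"Change failure rate: {change_failure_rate:.0%}")  # 50%
print(f"MTTR: {mttr}")                                    # 0:30:00
```

Because the records come from the automation itself rather than from manual ticket updates, the resulting metrics reflect what actually happened, which is McComas’s point.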

Avoiding ‘lift and shift’ to the cloud

For cloud adoption, McComas notes that GM Financial made a conscious commitment to avoid a ‘lift and shift’ approach.

Instead, he says, the company has treated it as an application platform re-architecture project. It has been a three- to four-year effort to change all the core business platforms that handle auto financing.

“In essence, instead of moving platforms, we are redeploying them and refactoring them into Azure, and then deprecating the old,” McComas explains. “It’s a lot more difficult and time-consuming, but it’s also a much more sustainable approach.”

It’s greenfield at a company that has been around for three decades.

McComas points to the example of the automation tool Chef, which his team uses for configuration management. For as great as Chef is, he notes, it’s difficult to implement in brownfield server fleets with hundreds or thousands of existing servers. Chef’s system doesn’t know much about those existing servers, and there’s often no good configuration management database for them, either.

In short, just as Avi Freedman noted, what’s often missing is a single source of truth.

“It’s really difficult in a brownfield enterprise to implement some types of automation because you almost have to redeploy everything to really get it right. It’s easy if it’s greenfield because you’re starting from scratch,” McComas says. “If you’re going out there and trying to automate things that are already there, then you’re not going to have a lot of information on them, most likely. It’s challenging for sure.”
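As a rough illustration of the gap McComas describes, here is a hypothetical sketch (not GM Financial’s tooling; the host names are invented) of reconciling what a configuration management tool actually manages against what network discovery finds:

```python
# Hypothetical inventories: nodes the config management tool manages
# versus hosts that network discovery finds in a brownfield fleet.
managed_nodes = {"web-01", "web-02", "db-01"}
discovered_hosts = {"web-01", "web-02", "web-03", "db-01", "db-02", "legacy-app-17"}

# Servers that exist but that automation knows nothing about --
# exactly the gap a single source of truth is meant to close.
unmanaged = discovered_hosts - managed_nodes
coverage = len(managed_nodes & discovered_hosts) / len(discovered_hosts)

print(f"Unmanaged servers: {sorted(unmanaged)}")  # ['db-02', 'legacy-app-17', 'web-03']
print(f"Automation coverage: {coverage:.0%}")     # 50%
```

In a greenfield deployment the two sets match by construction; in a brownfield one, the unmanaged remainder is where the redeployment work lives.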

Adoption as a measure of success in automation

Wertkin asked McComas to define how he measures success, as meaningful metrics for DevOps can often prove elusive. Or worse, Wertkin added, you decide on the wrong metric and then drive people to potentially do the wrong thing to meet it.

McComas says he tries to measure success by how many processes his team has automated. In particular, he adds, they measure how many processes they’ve made self-service so that other teams can serve themselves.

But it’s more than just how much his team has produced. It’s also about the extent to which others use those services.

“You can create a great service. It can be truly automated and truly self-service and truly innovative, but it’s not worth much if nobody uses it,” McComas says. “You have to not only create such a service, but you have to help people learn how to use it. So there’s very much an adoption side to this as well.”
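A minimal sketch of what tracking that adoption side might look like; the service names and usage counts are invented for illustration:

```python
# Hypothetical quarterly usage counts for self-service automations.
self_service_usage = {
    "server-provisioning": 340,  # requests served with no DevOps-team involvement
    "dns-record-updates": 1210,
    "tls-cert-renewal": 87,
    "db-snapshot-restore": 0,    # built and automated, but nobody uses it yet
}

adopted = [name for name, uses in self_service_usage.items() if uses > 0]
unused = [name for name, uses in self_service_usage.items() if uses == 0]

print(f"Self-service offerings: {len(self_service_usage)}")
print(f"Adopted (used at least once): {len(adopted)} of {len(self_service_usage)}")
print(f"Needing enablement and training: {unused}")
```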

Most importantly, McComas understands that automation transformation does not have a discrete endpoint.

“One thing I’ve learned is that transformation is a continuous improvement process. You’re never really finished. And I think our organization is learning that, too,” he says. “They had a transformation project, they wrapped it up, and then later they’re like, ‘Wait a minute! We still have a lot of work to do.’ I think now they’re finally figuring out there’s really not an end to transformation.”

To hear more of his insights, listen to Matt McComas’s full episode on the Network Disrupted podcast below.



Rebekah Taylor is a former journalist turned freelance writer and editor who has been translating technical speak into prose for more than two decades. Her first job in the early 2000s was at a small start-up called VMware. She holds degrees from Cornell University and Columbia University’s Graduate School of Journalism.
