IT pros debate: Who should own DNS in the cloud?

Six networking pros dig into who should own DNS in the cloud during the third Critical Conversation on Critical Infrastructure hosted in Network VIP.


December 18, 2020

Critical Conversations on Critical Infrastructure Ep. 3: “Who should own DNS in the cloud?”

Core infrastructure services like DNS, DHCP, and IP address management have long been a source of contention within the enterprise.

Who should administer these services? How should they be stewarded? Whose fault is it when something breaks?

The rise of the cloud has seemingly exacerbated that contention. And it has put two teams at odds over territory, credit, and blame: those who traditionally work with on-prem networks, and those who deal in cloud-based networks.

We invited professionals from both sides of the issue to help us approach the question. Our hope was not only to answer “Who should own DNS in the cloud?” but to shed light on the ‘how’ aspect as well.

The discussion highlighted a faulty premise in the question itself. As it turns out, not only is there no hard-and-fast rule, it’s also not quite the right question to be asking.

Moderator: Andrew Wertkin [LinkedIn | Twitter], Chief Strategy Officer, BlueCat and host of Network Disrupted podcast

Panelist: Bart Castle [LinkedIn | Twitter], Cloud Technical Trainer, CBT Nuggets

Panelist: Karun Malik, CSM [LinkedIn], TPM – Site Reliability Engineering & DevOps, Loblaw Digital

Panelist: David Muscat [LinkedIn], Infrastructure Architect, IBM Watson

Panelist: Lazaro Pereira [LinkedIn | Twitter], Senior Staff Network Engineer, Abrigo

Panelist: Mohan Persaud [LinkedIn], Director of Data Network Engineering, financial services industry

Below are the highlights from BlueCat’s third installment of Critical Conversations on Critical Infrastructure. To continue the conversation, follow up with panelists, and see what others have said about the roundtable, join Network VIP—our Slack community where pros in IT connect and share their expertise on all things networking.

First, rethink your understanding of ‘ownership’

Moderator Wertkin raised the point that the responsibility for architecting and building infrastructure, and the responsibility for making changes to it, should no longer sit with the same group.

He said, “It’s now our job, as we build out these infrastructures, to ensure that we’re building infrastructures that fundamentally work, that communicate with each other, that can be changed, even provisioned and deployed, as part of that system by the person launching the applications or making the changes.

“I think part of what this world of cloud brings to the way things used to work on-premises, is this idea that somebody stands between you and the change you want to make and they’re going to add their expertise into that [vis-à-vis security, compliance, etc.]. The infrastructure teams need to build a DNS that works, that can be owned by the people that want to make changes to it.”

He also didn’t believe that infrastructure must consist of a single technology stack, so long as the pieces can interoperate.

The case for democratizing DNS: speed and understanding

Loblaw Digital, where Malik works, ran into an issue: developers needed more infrastructure changes than the DevOps team could deliver on its own.

“The SRE DevOps team was expected to do a lot,” Malik recounted. “Then we started pushing into the dev-run model where we are doing the knowledge transfers to the devs, where they can do dev- and network-level things at the same time. Which is, like, a DevOps mindset, right? That the SRE team or the DevOps team cannot do everything for you.”

To accomplish this, DevOps began teaching developers the best practices and fundamentals they needed to make safe network changes, while also learning more about what the developers needed most. The teams met in the middle to enable one another. The result: Loblaw unlocked a level of speed only possible when those closest to application creation are empowered to deploy resources for themselves as they need them.
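What that guardrailed self-service might look like in practice: the sketch below is a hypothetical policy check, not Loblaw’s actual tooling. The zone names, TTL floor, and protected-record list are all illustrative assumptions; the idea is that a developer-initiated DNS change is applied only after it passes the rules the SRE/DevOps team codified.

```python
# Hypothetical sketch of a guarded self-service DNS change workflow.
# All names and thresholds are illustrative, not any real organization's config.

ALLOWED_ZONES = {"dev.example.com", "staging.example.com"}  # zones devs may touch
MIN_TTL = 60  # TTL floor, so short-lived records don't hammer resolvers
PROTECTED = {"api.dev.example.com"}  # records only the SRE team may change

def validate_change(name: str, record_type: str, ttl: int) -> list[str]:
    """Return a list of policy violations; an empty list means the change is safe."""
    errors = []
    if not any(name == z or name.endswith("." + z) for z in ALLOWED_ZONES):
        errors.append(f"{name} is outside the developer-owned zones")
    if ttl < MIN_TTL:
        errors.append(f"TTL {ttl} is below the {MIN_TTL}s floor")
    if name in PROTECTED:
        errors.append(f"{name} is protected; open a ticket with SRE")
    if record_type not in {"A", "AAAA", "CNAME", "TXT"}:
        errors.append(f"record type {record_type} is not self-serviceable")
    return errors

# A developer-initiated change is applied only if validation passes:
print(validate_change("feature-x.dev.example.com", "A", 300))  # []
print(validate_change("api.dev.example.com", "CNAME", 30))     # two violations
```

The design point is the one Malik describes: the SRE team doesn’t disappear from the loop, it moves its expertise into the rules, so developers can move fast without being able to make unsafe changes.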

Do not assume everything is interoperable

Hybrid environments raise serious questions for those charged with full-system resilience, compliance, and visibility. When a team with a singular view into just one of the worlds makes decisions, chaos ensues. As Persaud described, in situations in which cloud teams went off and unilaterally made changes, “Your internal teams are scrambling because someone stood all this up. And now they have to like, ‘Oh, well, how do we do this? How do we resolve DNS? How do we make sure that we aren’t traversing the internet to get to this box that you’ve put up, that who knows what’s on it? How do we secure it all?’”

Castle explained, “A lot of senior-level management just see DNS as a host and an IP address—that’s it. They just assume, ‘Oh, it’s automatically going to be there. The cloud provider provides DNS. We’re just going to make it work.’

“They move on to the next topic of what we have to take care of and forget about things like DNS, and what we can and can’t do, and how things need to be routed. That just seems to be the going thing, of taking things for granted and not having the proper toolsets in place to help support that. Or, your current set of toolsets don’t match up with what’s in the cloud.”

Muscat said, “At the end of the day, when you’re dealing with a third-party cloud vendor, their responsibility is just to make sure DNS is up and it’s running, and it’s providing the service that is being purchased.

“However, the on-prem DNS team’s responsibility is to make sure that not only is that service being made operational, but it’s also following policies and guidelines, making sure all the security measures are in place, making sure that you’re following any type of GDPR, FISMA.”

Do not underestimate your technical baggage

Many of the problems related to cloud migrations, Castle pointed out, seem to be tied not to issues with the cloud itself. Instead, they are tied to the “evolutionary conundrum that the organization has with some of the technical loopholes they built themselves into.”

In other words, according to Pereira, moving to the cloud can sometimes be an attempt to escape technical debt. And the implications are painful. He said, “At the same time as we have this awesome new technology with a lower barrier of entry, we still have the baggage from the past.”

Muscat added, “You’re going to worry about operationalizing it after. That’s the pain that comes back to the organization.”

The pain presents itself in myriad ways. Fragmented hybrid environments lack central visibility and control, which makes adhering to compliance standards and consistently applying security policy nearly impossible. Resolution becomes a mess of haphazard forwarding rules.
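The disciplined alternative to haphazard forwarding is a single, centrally maintained rule table that routes each query to the resolver that owns the zone. The sketch below assumes illustrative zone names and resolver addresses; it is not any vendor’s configuration format, just the longest-suffix-match logic that conditional forwarding implements.

```python
# Minimal sketch of centrally managed conditional forwarding.
# Zone names and resolver IPs are assumptions for illustration only.

FORWARDING_RULES = {
    "corp.example.com":          "10.0.0.53",   # on-prem authoritative DNS
    "internal.cloudapp.example": "10.100.0.2",  # cloud provider's private resolver
}
DEFAULT_RESOLVER = "10.0.0.53"  # everything else stays on the corporate path

def pick_resolver(qname: str) -> str:
    """Route a query to the resolver owning the longest matching zone suffix."""
    qname = qname.rstrip(".").lower()
    best_zone, chosen = "", DEFAULT_RESOLVER
    for zone, resolver in FORWARDING_RULES.items():
        if (qname == zone or qname.endswith("." + zone)) and len(zone) > len(best_zone):
            best_zone, chosen = zone, resolver
    return chosen

print(pick_resolver("db.internal.cloudapp.example"))  # 10.100.0.2
print(pick_resolver("example.org"))                   # 10.0.0.53 (default path)
```

When both the on-prem and cloud teams maintain this table together, Persaud’s scenario of internal teams scrambling after a unilateral change becomes much less likely: every new zone has an agreed owner and an agreed resolution path.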

Architecting hybrid systems requires engaging not only DevOps teams but, vitally, traditional on-prem teams as well. On-prem teams spend their days staying afloat amid technical debt, and they see many of the important nuances that need to be architected around.

Include both sides in discussions regardless of who owns it

Castle said, “The woeful lack of information that I run into with clients when we start talking about this… We’re saying, ‘Okay, make the right selection across service providers or service models or deployment models.’ We look at the base of info that they’re making these decisions off of, and you’d be lucky if it’s a chorus of crickets and a smattering of reports.

“Cloud computing and the adventure to get there is a great way to measure your organization’s sophistication and knowledge about what they do well and what they do poorly. And you will see it rear its head every time you try to migrate to cloud services across any of the service models.”

Wertkin added, “Somebody always has what seemingly is an easy answer to the problem you have. But if that decision is made without really understanding the architecture, you end up just creating more of an unholy mess that is harder and harder to maintain. I think that requires these teams working together and really trying to understand the requirements on both sides.”

Malik said, “It really falls under the people part of ‘people, process, tools.’ You have to be able to create a culture in your company and in your teams that doesn’t let that friction happen, or at the very least facilitates it. Right? You still need to make money at the end of the day, right? The goal for every company is to make money. Everybody should be behind that goal, everybody needs a paycheck, and that means that everybody has to learn to work together.”

Pereira added, “It’s actually a really hard question, who owns DNS in the cloud. I think Karun’s company’s probably got it pretty good. They’ve created a delineation that isn’t based on competition. They’re creating a delineation based on function. That’s probably a better fit and it doesn’t mean that somebody has to specifically own DNS in the cloud.

“They don’t have to be the winner. The only thing that really matters is successfully administering it. If you’re an on-prem guy and you have a compatriot in the cloud, you need to go make friends with them and you need to figure out how to bring your two sides together.”

That’s all for our Critical Conversations on Critical Infrastructure series in 2020, which has been full of helpful insight. Stay tuned for more about our next conversation coming in February 2021. Until then, join the discussion in Network VIP on Slack.



BlueCat is the Adaptive DNS company. The company’s mission is to help organizations deliver reliable and secure network access from any location and any network environment. To do this, BlueCat re-imagined DNS. The result – Adaptive DNS – is a dynamic, open, secure, scalable, and automated DDI management platform that supports the most challenging digital transformation initiatives, like adoption of hybrid cloud and rapid application development.
