How the cloud has complicated network observability

Kentik co-founder and CEO Avi Freedman chats about the merits of network observability and how the cloud has changed the networking landscape.

Rebekah Taylor

March 4, 2022

Network teams need to measure application performance, user engagement, and other key metrics. But you simply can’t monitor networks the way you once did.

Today’s networks are complex, spanning from the data center to the cloud. As a result, effectively observing networks and obtaining network telemetry is equally complicated. (Network telemetry is the collection of real-time data from network devices like switches and routers into centralized locations for analysis.)
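To make the definition concrete, here is a minimal sketch of the kind of math a telemetry pipeline performs once samples reach a central location: turning two cumulative interface byte counters into a rate. The device values and 60-second interval are hypothetical.

```python
# Minimal sketch: converting raw, cumulative device counters into a rate,
# the basic step a telemetry pipeline performs after collecting samples
# centrally. The sample values below are hypothetical.

def octets_to_mbps(prev_octets: int, curr_octets: int, interval_s: float) -> float:
    """Convert two cumulative byte-counter samples into megabits per second."""
    delta = curr_octets - prev_octets  # counters only ever increase
    return (delta * 8) / (interval_s * 1_000_000)

# Two samples of a cumulative byte counter, taken 60 seconds apart.
rate = octets_to_mbps(prev_octets=1_000_000_000,
                      curr_octets=1_750_000_000,
                      interval_s=60.0)
print(f"{rate:.0f} Mbps")  # 100 Mbps
```

Real collectors must also handle counter wraps and missed polls; this sketch shows only the core delta-over-interval calculation.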

For the third episode of the third season of the Network Disrupted podcast, Avi Freedman, co-founder and CEO of Kentik, a network observability company, sat down with host and BlueCat Chief Strategy Officer Andrew Wertkin. Before Kentik, Freedman spent a decade as VP of Network Infrastructure and Chief Network Scientist at Akamai Technologies.

They chatted about the challenges that Kentik’s customers face and the changes that the cloud has brought to the broader networking landscape. They also delved into the merits of observability and monitoring. Finally, they touched on how the need for good network observability and a single source of truth adds another layer to the complexity of cloud adoption.

Why do customers show up at Kentik’s door?

Often, Freedman says, his customers are struggling to monitor and manage numerous different kinds of networks.

There are the networks they run, which might include WAN, SD-WAN, a data center, and one or more cloud networks. And then there are the networks they don’t run, like the internet and SaaS applications. All of them can have significant effects on employee productivity, performance and agility, and revenue.

Ultimately, what his customers want is a solution to unify network management.

Furthermore, customers want to get a better handle on things beyond their border, like the internet. And they want to sync with systems of record. These include IP address management tools, Kubernetes, and configuration management databases.
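What syncing with a system of record looks like in practice is enrichment: joining raw telemetry against the metadata the record holds. The sketch below is hypothetical, with made-up addresses and a toy IPAM-style export standing in for a real integration.

```python
# Hypothetical sketch: enriching raw flow records with metadata from a
# system of record, such as an IPAM export. All names and addresses are
# invented for illustration.

ipam = {  # IP -> owner metadata, as a system of record might export it
    "10.0.1.5": {"app": "billing", "env": "prod"},
    "10.0.2.9": {"app": "ci", "env": "dev"},
}

flows = [
    {"src": "10.0.1.5", "dst": "10.0.2.9", "bytes": 42_000},
    {"src": "10.0.9.9", "dst": "10.0.1.5", "bytes": 1_500},
]

def enrich(flow: dict) -> dict:
    """Attach owner metadata to a flow; unknown IPs are flagged."""
    meta = ipam.get(flow["src"], {"app": "unknown", "env": "unknown"})
    return {**flow, "src_app": meta["app"], "src_env": meta["env"]}

enriched = [enrich(f) for f in flows]
print(enriched[0]["src_app"])  # billing
```

The second flow comes back tagged "unknown," which is itself useful: traffic from addresses the source of truth has never heard of is exactly what teams want surfaced.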

The days when we could memorize all the IPv4 addresses and name machines Fred and Wilma and Jason are past.

Internet centricity is another common reason that customers show up at Kentik’s door, Freedman adds. Most existing systems are blind to the internet. Still, the internet is how you get to your SaaS applications. It’s how your employees and customers reach you, and where your content delivery networks sit.

Another on-ramp is an observability mandate and the need to bring the network into it. Still others come because they are migrating data centers for their hybrid cloud implementation and need to understand application performance.

How the cloud has changed the broader networking landscape

Freedman is certainly a fan of the move to infrastructure as code.

Networking has always been a land of change.

The challenge, he says, is that the way cloud networking works in Azure, AWS, and Google Cloud is all different. Furthermore, all three of those are different from how you run your data center. And even among data centers, there is variation. You may use a major vendor, a magic cluster technology, or run it yourself with SONiC.

“The good thing is it’s an opportunity for learning,” Freedman says. “The bad thing is—the wonderful thing about standards is there’s so many to choose from.”
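One practical consequence of those differences is that flow telemetry from each environment arrives in a different shape, and a unified view means normalizing them. The sketch below is heavily simplified: the field names loosely echo AWS VPC Flow Logs and Azure NSG flow-log tuples, but the real schemas carry many more fields.

```python
# Simplified sketch of normalizing flow records from two clouds into one
# common schema. Field names loosely follow AWS VPC Flow Logs and Azure
# NSG flow-log tuples, but are abbreviated toy versions, not the real schemas.

def from_aws(rec: dict) -> dict:
    """AWS-style flow logs arrive as key/value records."""
    return {"src": rec["srcaddr"], "dst": rec["dstaddr"], "bytes": rec["bytes"]}

def from_azure(tuple_str: str) -> dict:
    """Azure-style flow tuples arrive as comma-separated strings (toy subset)."""
    src, dst, nbytes = tuple_str.split(",")
    return {"src": src, "dst": dst, "bytes": int(nbytes)}

records = [
    from_aws({"srcaddr": "10.0.0.1", "dstaddr": "10.0.0.2", "bytes": 500}),
    from_azure("10.0.0.3,10.0.0.4,900"),
]
print(sum(r["bytes"] for r in records))  # 1400
```

Once everything lands in one schema, queries and alerts can be written once instead of once per cloud, which is the point of unifying management in the first place.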

Limiting networking architecture sprawl

Much like the global architecture frameworks that Sandi Jones described, Kentik’s more successful customers pick a couple of standard architectures, whether in the cloud or on-premises. Those who tend to struggle more might have more than a dozen. Forget network telemetry; merely operating becomes too difficult.

Wertkin chimed in that the reasons for networking architecture sprawl are numerous. It might occur due to a lack of upfront planning. Or it might be because each line of business in an enterprise tried to solve its problems independently. Sometimes, it’s just that people want to use the next cool thing.

Whatever the reason, it can create real problems.

Cloud networking is often not configured by networking people

Another change that Freedman has observed is that networking in the cloud is often not turned on or configured by networking people. Instead, it might be cloud or API developers. While that’s great from an agility perspective, those people don’t have access to the network telemetry that they need.

It presents a unique cultural challenge to overcome.

“Now, anybody can go deploy a network,” Wertkin adds. “There needs to be architecture, there needs to be upfront planning. And I think there’s just too much discounting of the wise network engineer or architect that’s been around forever because he’s speaking a language we don’t know. It’s way easier in Amazon.”

The difference between observability and monitoring

When it comes to network telemetry, two terms that often get thrown around are observability and monitoring. But what’s the difference?

In the engineering sense, Freedman says, observability is the ability to infer what’s happening by the outputs of the thing being observed.

Meanwhile, monitoring has a lot to do with the things that you know go wrong and the things you know to look for.
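The distinction can be sketched in a few lines of code. Monitoring answers questions you wrote down in advance; observability keeps the raw outputs around so you can ask questions you did not anticipate. The telemetry records and the 90 Mbps threshold below are hypothetical.

```python
# Sketch of the distinction. Monitoring: a known failure mode checked with
# a fixed rule. Observability: raw data is retained, so new questions can
# be asked later. All records and thresholds here are hypothetical.
from collections import defaultdict

telemetry = [
    {"iface": "eth0", "mbps": 40}, {"iface": "eth0", "mbps": 95},
    {"iface": "eth1", "mbps": 10}, {"iface": "eth1", "mbps": 12},
]

# Monitoring: the thing you know to look for, expressed as a fixed rule.
alerts = [t for t in telemetry if t["mbps"] > 90]

# Observability: because the raw samples were kept, an ad-hoc question
# nobody planned for ("average rate per interface") can still be answered.
totals, counts = defaultdict(int), defaultdict(int)
for t in telemetry:
    totals[t["iface"]] += t["mbps"]
    counts[t["iface"]] += 1
averages = {i: totals[i] / counts[i] for i in totals}
print(len(alerts), averages["eth1"])  # 1 11.0
```

The fixed rule fires once; the ad-hoc query is the part monitoring alone cannot do, because a system that only stores alert states has already thrown the raw outputs away.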

“Being able to really do observability means I’ve got all these outputs and maybe the way I was looking at it doesn’t get at it,” Freedman says. “But me, the human, go in and do my diagnostics.

“We do proactive notification, baselining, learning over the data,” he continues, “But if you’re going to affect something that could take your business down, sometimes you want a human to look at it and say, ‘Yeah, I believe this. This is really the thing to do.’”

For that, he says, enterprises need an observability platform. With it, they can see a wide range of telemetry, keep a record of it, and execute on it. But it also lets humans interact with it.

However, it should not be something that only engineers can utilize. For an enterprise to get real value from these platforms, he says, a network operations center technician, a CFO, or a salesperson should be able to use it, too.

“Which means you do need, sometimes, just dashboards and maps and things like that,” he says. “And you also need the experts to be able to get in all the way down to the detailed data if they need it.”

Ultimately, observability and monitoring are synergistic. Enterprises can take approaches from each and combine them into what works best for their networks.

The need for observability in the cloud leads to workarounds

Many enterprises are at a point where they are realizing that they need network telemetry and observability tools for the cloud.

But Freedman notes that many network teams struggle to find a good solution.

Enterprises, especially corporate IT enterprises, aren’t resourced for building their own tools. So they opt for workarounds, he notes. They may use whatever platform their organization currently has, or they beg their traditional network appliance vendor to do it for them. Or perhaps they try to put together something on their own.

On the cloud-native side, extended Berkeley Packet Filter (eBPF), a kernel technology that lets sandboxed programs run inside the Linux kernel without changing kernel source code or loading kernel modules, shows promise. But, Freedman notes, there’s still the internet underneath. And likely still an array of different enterprise network architectures to contend with.

Freedman’s hope, of course, is that network teams will find the observability platforms they need by partnering with Kentik or utilizing its open-source tools.

Ultimately, Freedman encourages enterprises moving to cloud networking to think about focusing on sources of truth. Part of observability, he adds, is the metadata about what is running and what should be running.

“It’s not just about networking and technology and terminology, but also being able to unify it with applications, users, customers,” Freedman says. “One of the biggest challenges we see with some sophisticated customers is, we say, ‘Look, we have the ability to take all that input—where’s the source of truth?’ And they just start laughing. And maybe, eventually, they finish at some point.”
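The source-of-truth idea reduces to a simple comparison: what the telemetry observes running versus what the record says should be running. A hedged sketch, with both inventories invented for illustration:

```python
# Hypothetical sketch: comparing observed state (from telemetry) against
# intended state (from a source of truth). Both host inventories are
# made up for illustration.

intended = {"web-1", "web-2", "db-1"}         # what should be running
observed = {"web-1", "db-1", "mystery-host"}  # what telemetry actually sees

missing = intended - observed   # should be running, but not seen
unknown = observed - intended   # on the network, but not on record
print(sorted(missing), sorted(unknown))  # ['web-2'] ['mystery-host']
```

Both diffs matter: the missing host is an availability question, and the unknown one is the kind of surprise that makes customers laugh when asked where their source of truth lives.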

To hear all of what Avi Freedman has to say, listen to the full episode of the Network Disrupted podcast.


Rebekah Taylor is a former journalist turned freelance writer and editor who has been translating technical speak into prose for more than two decades. Her first job in the early 2000s was at a small start-up called VMware. She holds degrees from Cornell University and Columbia University’s Graduate School of Journalism.
