My Automation Journey Part 4: Data Models and Templates

John Capobianco

May 6, 2019

BlueCat invited John Capobianco, author of “Automate Your Network: Introducing the Modern Approach to Enterprise Network Management,” to walk us through his journey of network automation. From the planning phase to deployment up the stack, John will cover the tradeoffs and critical decisions that every network automation project should address – including the role of DNS. John’s opinions are solely his own and do not express the views or opinions of his employer.

Part 1:  Frameworks and Goals

Part 2:  Ansible and Initial Successes

Part 3:  Modernizing the Development Toolkit

Early tactical success

Welcome back! I hope you have enjoyed the series so far. Up to this point, we’ve focused on automating one-time tactical changes to the network.  Now I’d like to turn towards fully automated configuration management.

For the first few automated changes, I used Ansible to execute a series of orchestrated commands across multiple platforms, capturing pre-change and post-change state information along the way. While these tactical changes were successful, I discovered an even more powerful strategic automation capability. After modernizing my toolkit to include VS Code, TFS, Git, and Ansible, I was ready to go a step further and embrace fully automated configuration management.
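As a flavour of what those tactical playbooks looked like, here is a minimal sketch for a Cisco IOS change with pre-change and post-change state capture. The file name, show commands, and the change itself are illustrative, not my exact playbooks:

tactical_change.yml

---
- name: Tactical change with pre/post state capture
  hosts: DISTRIBUTION
  gather_facts: false
  connection: network_cli

  tasks:
    - name: Capture pre-change state
      ios_command:
        commands:
          - show ip interface brief
          - show ntp status
      register: pre_change

    - name: Apply the tactical change
      ios_config:
        lines:
          - ntp server 10.0.0.1

    - name: Capture post-change state
      ios_command:
        commands:
          - show ip interface brief
          - show ntp status
      register: post_change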

Infrastructure as code

Thinking about IT infrastructure as code allowed me to see the network as a series of commands on each device that, when combined correctly, result in the desired flow of data. At the start of an automation drive, the easiest way to derive these commands is to reverse engineer them from existing hardware configurations.

Most well-designed networks have distinct core, distribution, and access layers. Across any one of these layers, the commands themselves are often identical on each device (with minor variations). Re-using the hosts file discussed in part one of this series, where devices are grouped together by function or platform, we can begin to create data models and abstract important information from device configurations.
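For example, a simple static inventory might group devices by layer like this (the group and host names are placeholders):

hosts.ini

[CORE]
CORE01
CORE02

[DISTRIBUTION]
DISTRIBUTION01
DISTRIBUTION02

[ACCESS]
ACCESS01
ACCESS02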

Data models

Data models are human-readable YAML files containing structured data important to a device or group of devices. Using data models transforms the network to be intent-based, as the network developer’s intent is expressed as a single, easy-to-read file. The configuration commands required to deploy a device are abstracted away and become irrelevant to a network operator; ultimately, they will be derived from a template. Data models become the basis of all automatically derived configurations and documentation files.

Group Variables

Any group of devices in the hosts.ini file identified by square brackets, for example [DISTRIBUTION], can have common variables referenced as group_vars. Think of these as variables (data) common to all devices in a group. Since sub-groups can be nested inside and inherit variables from parent groups, you can define enterprise-wide variables that apply to all devices. As the network transforms into an intent-based code repository, the added benefits of standardization and best practices at scale become easily achievable.
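As a sketch of that nesting, a parent group in the inventory can collect the layer groups, and Ansible will then apply group_vars/ENTERPRISE.yaml to every device beneath it (the group names are illustrative):

[ENTERPRISE:children]
CORE
DISTRIBUTION
ACCESS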

As an abstract example, you might have the following group_vars YAML file called ENTERPRISE.yaml containing all of the common standard configuration variables enterprise network devices should use:

ENTERPRISE.yaml

---
enterprise_defaults:
  primary_ntp_server: 10.0.0.1
  secondary_ntp_server: 10.0.0.2
  domain_name: automateyournetwork.ca
  native_vlan: 99

enterprise_dhcp_servers:
  lab:
    - 192.168.1.1
    - 192.168.1.2
  prod:
    - 10.1.1.1
    - 10.1.1.2
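A template fragment can then reference these group variables on any device in the enterprise; for instance, something like this (illustrative) renders the NTP and domain configuration:

ntp server {{ enterprise_defaults.primary_ntp_server }}
ntp server {{ enterprise_defaults.secondary_ntp_server }}
ip domain-name {{ enterprise_defaults.domain_name }}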

Host Variables

Much like group_vars, the host_vars, or host variables, are also part of the data model. Each individual device in the hosts file, for example DISTRIBUTION01, has its own YAML data model. DISTRIBUTION01.yaml contains variables specific to that individual device on the network. The combination of the group and host variables forms the intent for any given individual device. Host variables could be characteristics like a device hostname, management IP address, VLANs configured on the device, or other unique values.

A sample group of host variables for DISTRIBUTION01.yaml:

DISTRIBUTION01.yaml

---
host_defaults:
  host_name: DISTRIBUTION01

host_vrfs:
  global:
    tag: 1
    message_digest: true
    stub: true
    networks:
      "1":
        value:
          - "192.168.1.0 0.0.0.255"
  BLUE_Zone:
    tag: 10
    message_digest: true
    stub: true
    networks:
      "10":
        value:
          - "10.10.10.0 0.0.0.255"
  RED_Zone:
    tag: 40
    message_digest: true
    stub: true
    networks:
      "20":
        value:
          - "10.20.20.0 0.0.0.255"

Templates

Once the data has been abstracted, the remaining configurations need to be templated. Templates are written in Jinja2 format and contain basic logic operators such as “if” statements and “for” loops. Using some of the examples above, the matching templates might look something like this:

Global_network_configuration.j2

hostname {{ host_defaults.host_name }}

{% for host_vrf in host_vrfs | sort %}
{%   if host_vrf != "global" %}
vrf definition {{ host_vrf }}
 vnet tag {{ host_vrfs[host_vrf].tag }}
 address-family ipv4
 exit-address-family
{%   endif %}
{% endfor %}
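Rendered against the sample DISTRIBUTION01 data model above, that template produces a configuration fragment along these lines (the global VRF is skipped by the if statement):

hostname DISTRIBUTION01

vrf definition BLUE_Zone
 vnet tag 10
 address-family ipv4
 exit-address-family
vrf definition RED_Zone
 vnet tag 40
 address-family ipv4
 exit-address-family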

The result at runtime is a compiled, intent-based configuration that can be pushed to the device automatically via Ansible (a sketch of such a push playbook follows the list below). Transitioning to data models and templates enables several benefits:

  • Enforce corporate standards and best practices at scale
  • Ensure security features are being deployed
  • Eliminate human error
  • Automate the entire device configuration from intent
  • Transition from complex device configurations to human-readable data models
  • Create a single source of truth, the Git repository, representing a known working network state
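Pushing the compiled configuration is then a small playbook in itself. Here is a minimal sketch, assuming Cisco IOS devices and illustrative file names and paths:

push_intent.yml

---
- name: Compile and push intent-based configurations
  hosts: DISTRIBUTION
  gather_facts: false
  connection: network_cli

  tasks:
    - name: Render the configuration from the data model
      template:
        src: Global_network_configuration.j2
        dest: "configs/{{ inventory_hostname }}.cfg"
      delegate_to: localhost

    - name: Push the compiled configuration to the device
      ios_config:
        src: "configs/{{ inventory_hostname }}.cfg"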

Full configuration management

After completing the data models for each device and templating the desired configuration commands, I achieved fully automated configuration management. Every line of configuration was being generated automatically from my intent, represented as a Git-based TFS repository with full version and source control.

Idempotency

One of the main benefits of Ansible is idempotency. Idempotency means that a playbook can be executed once, twice, or a million times and always produce the same result. In terms of automation, a playbook is idempotent when the intent-based configuration matches the running configuration. When there is a discrepancy between intent and the running configuration, the playbook pushes only the differences. At the operational level, this means that each playbook run converges the device to the same end state, that I can compare my generated intent-based configurations against live configurations to see whether they match, and that Ansible only pushes changes to a configuration when the intent changes. The master branch in our Git-based TFS repository therefore truly reflects the state of the network’s configuration at any given point in time.
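You can see this in the play recap. On a second run against an unchanged network, you would expect output along these lines (the hostnames and task counts are illustrative):

PLAY RECAP *********************************************************
DISTRIBUTION01   : ok=2  changed=0  unreachable=0  failed=0
DISTRIBUTION02   : ok=2  changed=0  unreachable=0  failed=0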

Now when there is a bug to fix, a change to make, or a new feature to release, a new working branch is created in Git using TFS. Either the data model or the template is developed, tested, and merged into the master branch via a pull request, and then automatically deployed at scale.
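The day-to-day workflow looks something like this (the branch name, file, and commit message are illustrative):

git checkout -b add-green-zone-vrf
# edit host_vars/DISTRIBUTION01.yaml to add the new VRF
git add host_vars/DISTRIBUTION01.yaml
git commit -m "Add GREEN_Zone VRF to DISTRIBUTION01"
git push origin add-green-zone-vrf
# open a pull request; after review and merge, re-run the deployment playbook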

What’s next?

In a way, this blog series has made automation seem like a breeze to implement.  It isn’t.  The journey I outlined here – from no automation to a fully automated intent-based programmatic network – took approximately two years.

Now that the foundational network is automated, I am putting the focus of my next two years towards moving up the stack and automating even more critical services and functions. Layers 1-3 are now under control, but what about services at Layers 4-7? My goal is to use this newfound toolkit and these methodologies on load balancers, firewalls, and ultimately application layer services like DNS.

The critical nature of DNS means automation would provide even greater value by eliminating risk while ensuring quality and agility at the uppermost, public-facing layer of the network. Looking forward, the ability to re-use the same automation methodologies on DNS (source control via Git, a repository in GitHub / TFS, VS Code for development) would add great value to the organization by eliminating repetitive, error-prone tasks. A continuous integration / continuous delivery (CI/CD) pipeline that automatically handles all layers of the network, from the underlying transport to the top-of-stack DNS entries, could harmonize a currently disjointed process.

Shifting the focus to an application-centric view of the network starts with DNS. The ability to include DNS automation in a well-established, intent-based, end-to-end service provisioning playbook would be the pinnacle of network automation. This is where I am heading next in my journey.

I want to thank you for joining me on my journey. Should my next two years be as revolutionary as my past two, I may return with more posts. But for now – best of luck on your automation journey.



John Capobianco is the Senior IT Planner and Integrator for the House of Commons, Parliament of Canada. He is a 20-year IT professional who has fallen in love with automation and infrastructure as code. John maintains his CCNA, 2x CCNP, 5x Cisco Specialist, and Microsoft Certified ITP: Enterprise Administrator while continuously developing his programming and automation skills. He authors books and an automation-themed blog, automateyournetwork.ca. Find him on Twitter @john_capobianco or LinkedIn /john-capobianco-644a1515.
