Owning the Data Plane: Part 1
Back in the “olden days” (the pre-Cloud era), networks were physically cabled together, and diagnosing and repairing issues like bad cabling, packet loss, or routing problems meant packet captures, CLI commands, and maybe a handy console cable (with the appropriate drivers installed!). The infrastructure you managed may even have had a few application tools to help troubleshoot these problems. We now understand how limited that approach was with respect to scaling, and how slow it actually was to support and remediate.
In modern times, the Cloud has solved many problems, though at the cost of some critical functions: end-to-end visibility, security, and architecture across the Clouds and the on-prem landscape. Take the data plane, the path in the CSP fabric that packets ride along the ethereal cloud wire: it is no longer an obvious or intelligible path with mechanisms that are visible, much less accessible. The constructs that transport data from one virtual NIC to another behave in a different, unfamiliar way. First and foremost, the OSI model in the CSP world is primarily L3 and above, with L2 only coming into play when connecting via onramps like Equinix or Megaport, or cloud direct services like ExpressRoute or Direct Connect. The CSP’s narrowed view, or overlay, of the network is “simplified” and abstracted, with the visibility we are accustomed to removed.
Secondly, the routing mechanisms in the cloud are also abstracted. In AWS, routes are entered with a next hop of an interface (an ENI) instead of an IP address; Azure handles routes in a more traditional manner, with an IP as the next hop. In either case, the controls on route propagation (especially when you are creating L3 or other specific network boundaries in your network) are not obvious or intelligible, much less integrated with your security policy.
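To make the contrast concrete, here is a minimal sketch of what “the same” route might look like in each cloud, modeled as plain dictionaries. The field names echo the AWS route and Azure user-defined route shapes, but the resource IDs and addresses are made up for illustration:

```python
# Illustrative only: hypothetical shapes for an equivalent route in each cloud.
# In AWS, the target is typically a resource ID (here, an ENI);
# in an Azure user-defined route, the next hop can be a plain IP address.
aws_route = {
    "DestinationCidrBlock": "10.20.0.0/16",
    "NetworkInterfaceId": "eni-0a1b2c3d4e5f6a7b8",  # next hop is an interface, not an IP
}
azure_route = {
    "addressPrefix": "10.20.0.0/16",
    "nextHopType": "VirtualAppliance",
    "nextHopIpAddress": "10.0.1.4",  # next hop is an IP, like a classic router
}

def describe(route: dict) -> str:
    """Summarize prefix -> next hop, whichever style the route uses."""
    prefix = route.get("DestinationCidrBlock") or route.get("addressPrefix")
    hop = route.get("NetworkInterfaceId") or route.get("nextHopIpAddress")
    return f"{prefix} -> {hop}"

print(describe(aws_route))    # 10.20.0.0/16 -> eni-0a1b2c3d4e5f6a7b8
print(describe(azure_route))  # 10.20.0.0/16 -> 10.0.1.4
```

The point of the sketch: the intent (“send this prefix to my firewall”) is identical, but the abstraction leaks differently in each cloud, and neither representation tells you anything about how the route propagates.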
Now, I don’t like to date myself with these still “tried and true” legacy protocols and terminologies but, in the Cloud, the game has changed. The CSPs (AWS, Azure, GCP, the list goes on) maintain an XaaS model; the bygone physical-world constructs like cabling, spanning tree, and LACP (you get the point) have disappeared, and functions like routing, security, and troubleshooting have been replaced, substituted and, in the process, shrouded by this new software-defined infrastructure. Some would call this the “black box” in the cloud, but it bears no resemblance to what we would like the term to mean, a flight recorder; instead, it is more like a black hole. Packets transit over 1st-party CSP constructs, with hidden CSP magic performed on them to get from one workload to another.
Some CSPs may claim they have visibility into the path but, in reality, their APIs and underlays are more of an event horizon: you have to get very close to the packet (or, in the case of Azure, the NIC) to understand the routing behavior, limited as it is, with a super helpful “effective route” as your breadcrumb, your canary . . . I digress. Path selection, whether viewed up close (the effective route at the NIC) or at a distance (the VNET route table), follows an order-of-operations ruleset but, under the covers, the fabric and its cloud mechanisms are subject to further, non-published rules.
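The one piece of that order of operations everyone can reason about is longest-prefix match: the most specific matching route wins. Below is my own minimal sketch of that rule (not any CSP's actual algorithm), with an invented route table; it mimics what an effective-route view ultimately reflects:

```python
import ipaddress

# Hypothetical route table: prefix -> next hop description (illustrative only).
ROUTES = {
    "0.0.0.0/0": "Internet",
    "10.0.0.0/16": "VNET-local",
    "10.0.1.0/24": "VirtualAppliance 10.0.1.4",
}

def effective_route(dest: str) -> str:
    """Return the next hop of the matching route with the longest prefix."""
    dest_ip = ipaddress.ip_address(dest)
    best = None
    for prefix, hop in ROUTES.items():
        net = ipaddress.ip_network(prefix)
        # Keep the most specific (longest) matching prefix seen so far.
        if dest_ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1] if best else "None (dropped)"

print(effective_route("10.0.1.5"))  # the /24 wins over the /16 and the default
print(effective_route("8.8.8.8"))   # only the default route matches
```

Longest-prefix match is the published part; the frustration is everything the fabric does after this step, which you cannot model because it is not documented.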
In the cloud we need to be dynamic, and a static ruleset with static routes running on an invisible machine is not as futuristic and capable as I had imagined the cloud to be. Even in the macroscopic sense, there is not so much as a sign or, if I may interject yet another alliterative, Seussian-esque reference, any Hawking radiation revealing routing rate reality! (Say that five times fast!)
To compound this problem, these effective-route “insights” are also delayed: gathering the routes requires querying additional resources, and those queries are time consuming, as if literally being run by an overfed hamster on a broken, taped-up wheel. But I digress. The process is slow and, in the heat of battle (that is, troubleshooting), it is not ideal. It leaves you with a limited, narrow understanding of the path and without meaningful metrics like NetFlow, packet loss, utilization, ECMP, or BGP detail, or any description of the packet’s path as it flies through the enigmatic pseudowire.
I think you get my point: visibility and troubleshooting exist natively in the cloud, but I wouldn’t rely on them alone given their limited, inefficient view of the network. To tap into the black box and capture meaningful, actionable detail, you need to own the data plane. Having an infrastructure in place to support and unlock that visibility is key, not only from a functional, operational point of view but, as some would agree, more importantly from a security vantage point. If only there were a platform with these types of best-practice, well-architected frameworks to provide for this . . .
Enter Aviatrix and its Network Security Platform.
Aviatrix is built with a new architecture, specifically the MCNA (Multi-Cloud Network Architecture), and, through the deployment of its software gateways, gives the ownership and control of the data plane back to the customer. Full visibility is now possible.
- Threat intelligence and IPS/IDS at the VPC/VNET level with ThreatIQ/ThreatGuard.
- Intelligent orchestration of cloud routes and security by a centralized controller.
- Historical data and routing behavior (dynamic/static/propagated), now available as Topology Replay in CoPilot, aka the Cloud DVR.
- Automatic steering of traffic through Next-Generation Firewalls (NGFW) via FireNet.
- Efficient troubleshooting via application path testing (AppIQ) and source/destination testing (FlightPath).
More importantly, intelligence and security are built into each instance, in turn transforming native cloud network connectivity into a secure, uniform, accessible, and efficient data path.
You can truly break free from the restrictions and limitations of the CSP. From a footprint as small as a single VPC/VNET, scaling all the way up to a multi-region, multi-cloud transit network, you own the path across your network, across your cloud, to and from your data centers, and everywhere in between. Adding this to your existing network is easy, straightforward, and repeatable. Also, all connections built on this new architecture are encrypted by default (more on this in Part 2).
In closing, and as Morgan Freeman once said: you can try to use cloud-native, out-of-the-box CSP tools, but I wouldn’t recommend it.