The Truth About Micro-Segmentation: Healthy Heterogeneity (Part 3)

“Civilization is a progress from indefinite, incoherent homogeneity toward a definite, coherent heterogeneity.”  ― Herbert Spencer

In my prior two posts, I covered the proverbial “bait and switch” of macrosegmentation for microsegmentation and the differences between visibility and application dependency mapping in security. In this concluding post, I want to focus on the dangers of “monoculture” in security and computing operations.

Today’s data center and cloud environments have hatched more healthy innovation than we have seen in a generation.  In computing formats, we are in a post-virtualization era, where containers and “serverless” formats are rapidly gaining ground and must run alongside legacy approaches.  The maturation of the API economy has produced applications that are more dynamic and distributed, spanning multiple data centers and clouds.  To wit, in the past year I have seen single distributed applications that run within and across more than 35 data centers.

So why would anyone choose a data center and cloud microsegmentation solution that is boat-anchored to those fixtures of the client-server era, the network and the hypervisor?

Today, there are three types of segmentation architectures: network-centric, hypervisor-centric, and distributed (i.e., host-centric).

[Figure: Network Segmentation Architecture]

Let’s take a look at each one and review its puts and takes.

In the Network

Segmentation, whether performed in a switch or a firewall, was designed during the era of static workloads and excels at “North/South” traffic flows, where “big iron” plays an important role from a throughput or lookup perspective.  High-capacity firewalls are terrific at filtering inbound traffic with granular flow analysis and prove useful for clustered storage and aging legacy computing platforms such as the IBM AS/400.

The challenges of this model are its reliance on proprietary and operationally complex hardware, where replacing hardware can prove daunting from a cost or availability perspective.  Hardware solutions do not translate to cloud environments such as Amazon Web Services or Microsoft Azure, and the “virtualized” versions of hardware solutions – i.e., running on a virtual machine – have serious throughput limitations and create fragility through service-chaining or traffic-steering challenges.  Rarely can service chaining be a bullet-proof approach.

Data centers increasingly must be optimized for lateral traffic (approximately 80% of all DC traffic) as speed and agility become the most important drivers for IT and Security teams.  In the new IT economy, applications are the new profit centers while infrastructure remains a cost center.

In the Hypervisor

Segmentation in the hypervisor was designed to filter traffic through hypervisor-attached firewalls or, increasingly, network virtualization platforms such as VMware NSX.  Each hypervisor has visibility into traffic flows and can enforce security policies locally.

The benefits of such an approach range from visibility into overlay software-defined networks to stopping out-of-policy traffic before it hits the physical network.  In homogeneous environments, hypervisor-centric segmentation can provide programmable APIs that eliminate some of the manual work associated with firewall management.

The limitations however are well understood:

• Poor support for legacy servers, NAS, bare-metal, containerized or public cloud workloads

• Limited support for heterogeneous virtualization environments

• No knowledge of processes or services initiating traffic

• Additional hypervisor overhead

• Potential scale limitations, including Application Dependency Mapping 

Distributed Segmentation

The newest form of microsegmentation is derived from an overlay approach that is decoupled from the infrastructure yet takes advantage of packet filtering in the operating system (e.g., the Windows Filtering Platform, or iptables and Berkeley Packet Filter in Linux) or in other devices (the Layer 4 firewall in load balancers, ACLs in network switches).  This approach lets security professionals craft policy centrally and distribute enforcement for scale.
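To make the “craft policy centrally, enforce locally” idea concrete, here is a minimal sketch in Python. Everything in it is illustrative, not any vendor’s actual API or schema: a hypothetical label-based allow-list policy and workload inventory are compiled into the inbound iptables rules a single host would enforce, with a default-deny rule at the end.

```python
# Hypothetical sketch: a central, label-based allow-list policy compiled
# into per-host iptables rules. The policy schema, labels, and helper
# names are all illustrative assumptions, not a real product's API.

# Central policy: which source labels may reach which destination
# labels, and on which TCP port.
POLICY = [
    {"src": "web", "dst": "app", "port": 8080},
    {"src": "app", "dst": "db",  "port": 5432},
]

# Inventory: label -> workload IPs (bare metal, VM, or container alike).
INVENTORY = {
    "web": ["10.0.1.10", "10.0.1.11"],
    "app": ["10.0.2.10"],
    "db":  ["10.0.3.10"],
}

def render_iptables(host_label):
    """Render the inbound iptables rules a host with this label enforces."""
    rules = []
    for rule in POLICY:
        if rule["dst"] != host_label:
            continue  # this policy line does not apply to us
        for src_ip in INVENTORY[rule["src"]]:
            rules.append(
                f"iptables -A INPUT -p tcp -s {src_ip} "
                f"--dport {rule['port']} -j ACCEPT"
            )
    rules.append("iptables -A INPUT -j DROP")  # default-deny everything else
    return rules

for line in render_iptables("app"):
    print(line)
```

The point of the sketch is the division of labor: the policy is written once, in terms of workload labels rather than network constructs, and each host derives only the rules relevant to itself, so enforcement scales out with the workloads instead of concentrating in a choke point.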

The benefits of this approach include: 

• Complete application visibility, regardless of underlying infrastructure

• Insight into processes and services establishing connections

• Bare-metal, VMs & containers; On-prem and/or in the cloud

• Stops out-of-policy traffic before it hits the physical network

• Integration into heterogeneous environments

• Programmable APIs

Limitations include a lack of need in smaller, single-vendor environments with 100% virtualized workloads, and perceived concerns about agents installed for telemetry collection.

If your world is increasingly distributed, dynamic, heterogeneous and hybrid, the architectural choice is clear. 


Alan S. Cohen is chief commercial officer and a board member at Illumio. He leads Illumio’s go-to-market strategy and customer engagement life cycle organizations, including marketing, support, talent and IT. He is a 25-year technology veteran known for company building and new-market-creation experience. Alan’s prior two companies, Airespace (acquired by Cisco) and Nicira (acquired by VMware), were the market leaders in centralized WLANs and network virtualization, respectively. He also is an advisor to several security companies, including Netskope and Vera.
