For a long time, data center architects have struggled to provide connectivity across racks of servers and across multiple locations while maintaining isolation for groups of those servers.  For example, the Accounting department might not want its servers accessible to someone working on another department’s servers.  The Accounting servers need to be isolated, that is, securely separated from the other servers in the data center.

For years, the common solution has been a switch feature called VLANs, or Virtual Local Area Networks.  Creating VLANs is a switch-level configuration.  If static, VLAN configuration requires each port on the switch to be assigned to the appropriate VLAN, based on the “grouping” of servers.

For example, the Accounting servers might all be in VLAN 100, and the switch ports that the servers connect to would each be configured as members of the VLAN 100 group.  If there are servers in separate locations, as in the graphic below, then the switch those servers connect to needs to be configured for VLAN 100 and for VLAN trunking.  Adding servers, moving equipment, and replacing or upgrading switches all create lots of work in this kind of infrastructure.
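To make this concrete, here is what a static VLAN assignment might look like on a managed switch.  The syntax below is Cisco IOS-style and purely illustrative; exact commands vary by vendor, and the port numbers are hypothetical.

```
! Create the VLAN for the Accounting servers
vlan 100
 name Accounting
!
! Each access port connecting an Accounting server must be
! assigned to VLAN 100, one port at a time
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 100
!
! The uplink between the two locations must be a trunk that
! carries VLAN 100, so the VLAN spans both switches
interface GigabitEthernet0/24
 switchport mode trunk
 switchport trunk allowed vlan 100
```

Every port has to be touched individually, and every switch in the path has to agree on the trunking configuration, which is exactly why moves, adds, and switch replacements generate so much work.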


Within the past few years, as virtualization has become mainstream, many vendors have begun to offer an alternative to VLANs for isolating virtual servers: Software Defined Networking (SDN).  With SDN, the administrator defines an IP subnet logically; then, as servers are added to the virtual IP subnet, each takes on an IP address in that subnet’s range.

Most virtualization vendors and most cloud providers make this available as part of their services.  So, returning to the earlier example: Accounting wanted its servers isolated and secured from potential intrusion.  In an SDN environment, we would first create an Accounting virtual subnet and then add the Accounting department servers to it.  Because of the way software defined networks operate, this is an isolated, non-routed subnet, and the servers on it are isolated from all other servers in the infrastructure.
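The workflow above can be sketched in a few lines of Python.  This is a conceptual model only: the `VirtualSubnet` class, its methods, and the address ranges are all hypothetical stand-ins for whatever API your virtualization platform or cloud provider actually exposes.

```python
import ipaddress

class VirtualSubnet:
    """Toy model of an SDN logical subnet (hypothetical API, not a real controller)."""

    def __init__(self, name, cidr):
        self.name = name
        self.network = ipaddress.ip_network(cidr)
        self._hosts = self.network.hosts()  # generator of usable host addresses
        self.servers = {}                   # server name -> assigned IP

    def add_server(self, server_name):
        """Attaching a server assigns it the next free address in the subnet's range."""
        ip = next(self._hosts)
        self.servers[server_name] = ip
        return ip

def can_communicate(subnet_a, subnet_b):
    """These logical subnets are non-routed, so servers can talk only
    when they share the same virtual subnet."""
    return subnet_a is subnet_b

# Address ranges below are illustrative, not prescribed by any platform.
accounting = VirtualSubnet("Accounting", "10.0.100.0/24")
marketing  = VirtualSubnet("Marketing",  "10.0.200.0/24")

print(accounting.add_server("acct-01"))        # 10.0.100.1
print(accounting.add_server("acct-02"))        # 10.0.100.2
print(can_communicate(accounting, marketing))  # False: isolated subnets
```

The key point the sketch captures is that adding a server is a single logical operation against the subnet, with no per-port switch work.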


Gone are the complexities of switch configuration for VLANs, and the ugliness of VLAN trunking when adding servers or upgrading and replacing switches.  Now adding a server to the Accounting virtual network is as simple as deploying it to the Accounting subnet.  Done.

These three virtual networks could all exist within a server virtualization farm or in a public cloud infrastructure.  The virtual networks are logical networks.  Behind all of this we still have physical switches, routers, and gateways connecting the host servers.  So, how is this isolation achieved?  How is it that I can access a server on the Marketing virtual network yet may not even be able to see the Accounting servers, even though they are all on the same physical infrastructure?
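The core idea, in brief: SDN platforms typically encapsulate each tenant’s traffic with a virtual network identifier (the VNI in VXLAN is one common example), and a host only delivers a packet to a virtual machine when the identifier matches that machine’s network.  The sketch below is a conceptual model with hypothetical names and IDs, not any vendor’s implementation.

```python
# Conceptual model of overlay isolation: every packet carries the
# sender's virtual network ID, and the receiving host drops packets
# whose ID does not match the destination VM's network.
# (Server names and network IDs here are hypothetical.)

vm_network = {
    "acct-01": 5001,  # Accounting virtual network ID
    "acct-02": 5001,
    "mktg-01": 5002,  # Marketing virtual network ID
}

def deliver(packet):
    """Return True if the physical host would hand the packet to the VM."""
    src_vni = vm_network[packet["src"]]
    dst_vni = vm_network[packet["dst"]]
    # All traffic rides the same physical switches, but the encapsulation
    # header confines each tenant's packets to its own virtual network.
    return src_vni == dst_vni

print(deliver({"src": "acct-01", "dst": "acct-02"}))  # True: same virtual network
print(deliver({"src": "mktg-01", "dst": "acct-01"}))  # False: crosses tenants
```

The physical network never needs per-tenant configuration; isolation is enforced entirely by the identifier check at the edge.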

And what if we want some routing capability?  What if the Accounting servers need to communicate with, say, the Marketing servers?  What if we want to place the Accounting servers locally, while placing the Marketing servers in a public cloud?  And if these servers are so isolated, how do we manage them?  Stay tuned for the next discussion on how these virtual networks really work, and a later discussion on extending these networks, connecting them to other networks, or even connecting on-premises and cloud networks!