Today’s business drivers demand continuous application availability, not only for office staff but also for the growing telecommuting community. To meet these challenges, we are seeing more IT chiefs moving away from a primary data center plus a backup disaster recovery (DR) data center toward an active-active data center architecture.
The active-active data center allows for business continuity during maintenance and migrations, along with the ability to load-balance business communications. In addition, it instills confidence that a failover will work should there be a disaster.
The best way for organizations to take advantage of the idle resources at the DR center is to add application mobility in a phased approach. The first phase would be to move one or two applications to the alternate data center. The most elegant technique to achieve application mobility is through IP portability with virtual LAN (VLAN) extension.
This allows applications to keep existing network and security policies as you perform stateful hot migration from one location to another, resulting in zero interruption between the client and the application. This architectural approach builds the foundation for more mobility while allowing the enterprise to become highly available with less complexity.
The active-active architecture is a win-win and solves some of the most complex challenges for IT staff. As with all infrastructure changes, if there are no business drivers it is difficult to get funding. But in this case, it’s easy to list common business drivers that make the technology worth considering.
Common Business Drivers
- A changing work culture (i.e., 24x7x365) means business applications need continuous availability
- Ongoing budgetary expense to keep the disaster recovery (DR) data center ready for business continuity while its resources are rarely utilized: for example, expensive WAN links running lightly loaded, and servers and networking gear sitting idle
- Difficulty for IT staff to schedule routine upgrades or emergency maintenance for critical business servers or key networking equipment in the data center
What is needed for host mobility?
The goal is to enable the data center and DR data center to be viewed as one data center by all clients, while providing the ability to have the application reside in either data center without changing its identity. There are four parts to make this successful:
- Path optimization, so clients always take the correct path to the active application
- Stateful devices, such as firewalls and load balancers
- SAN extension, so the application’s data is available in both locations
- Layer 2 (L2) extension, so the application keeps its IP identity as it moves
Of these four parts, path optimization is the area where we have made the most recent advances. The other parts of data center mobility, including stateful devices, SANs and L2 extensions, have very good solutions that will be discussed in future posts. I would like to introduce you to path optimization as the piece that completes the puzzle and enables us to address the business drivers that we have had for years.
Deciding on the Mobility Option
There are two options when designing for mobility: IP portability or DNS redirects. These solutions ensure that clients have the correct path to reach the active application. Incorrect designs cause a number of problems, such as asymmetric routing or one-way flows through stateful devices. These are serious problems that lead to intermittent issues which are very difficult to troubleshoot and resolve.
With the DNS redirects approach for mobility, there are more dependencies than you will find in an IP portability solution. These dependencies equate to a more complex solution to manage. Using DNS redirects, you create two networks with unique IP addressing, one for each data center. Therefore, you have doubled your management infrastructure for applications and limited some capabilities. Below are three common issues you may encounter; they are just a few of the problems you might have to deal with when using DNS redirects for mobility.
First: It is very common for enterprises to have custom applications with IP dependencies written deep in the code, making it difficult to change the IP addressing. Also, some applications and databases have licenses tied to an IP address. If you do change the IP address on the server when moving, requesting an alternate IP address tied to the database license from the manufacturer could take days and could incur extra costs.
Second: We need to ensure that the DNS redirects are propagated correctly throughout the DNS hierarchy. When we move the application to the alternate data center, we create a new DNS entry so clients can reach the data center where the application now resides. The remote clients in your enterprise will need to receive the new DNS update, which has to pass through the DNS hierarchy. This hierarchy may or may not be within your control, and the update may need to propagate throughout the Internet.
Third: The most common issue we see with DNS updates is that some operating systems cache DNS entries locally. Therefore, we need to manually clear the local cache to pick up the update. Without the update, clients will never reach the application once it has moved.
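The cache-clearing step varies by operating system. As a rough illustration, here is a minimal Python sketch that maps the local OS to its flush command; the helper names and the OS mapping are my own (the commands themselves are standard OS utilities, and on Linux the `resolvectl` verb assumes systemd-resolved is in use):

```python
import platform
import subprocess

def dns_flush_command(system=None):
    """Return the command that clears the local DNS resolver cache."""
    system = system or platform.system()
    commands = {
        "Windows": ["ipconfig", "/flushdns"],
        "Darwin": ["dscacheutil", "-flushcache"],   # macOS; mDNSResponder may also need a HUP
        "Linux": ["resolvectl", "flush-caches"],    # assumes systemd-resolved
    }
    try:
        return commands[system]
    except KeyError:
        raise ValueError(f"No known flush command for {system!r}")

def flush_local_dns_cache():
    """Run the flush command for the current OS (may require admin rights)."""
    subprocess.run(dns_flush_command(), check=True)
```

In practice this is the kind of per-client step that makes DNS-based mobility harder to operate than IP portability: the flush has to reach every affected client, not just the data center.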
As with all networking solutions, there are many ways to reach the same results, but the ones with the least complexity are the most successful, easiest to manage and least expensive. When possible, I always look for the approach with the fewest areas that can be affected. This eliminates a number of issues and is inherently easier to manage. IP portability is an approach to application mobility with fewer dependencies. It also gives you the ability to build the application once and move it as a system without changing its identity, eliminating many of the dependencies that we find within the enterprise.
Enterprises that have deployed a DR data center to back up their primary enterprise data center, in the event of a disaster, have taken the first step in mobility by identifying the critical business application and its dependencies. This is a very important step because the complexities of today’s business applications are no longer just server-to-client, commonly referred to as north-to-south flows.
The emerging traffic patterns in today’s data centers are more commonly driven by federated applications and server virtualization initiatives. These patterns are seen as east-to-west, or server-to-server, flows within the data center, a reference to the two- or three-tiered server architectures that have allowed us to scale our applications to meet growing business requirements. These newly deployed applications generate much more traffic within the data center. Therefore, when moving an application to the alternate data center, we need to account for all of its dependencies.
Once the application has been identified, you can decide whether your business requires no interruption as you move hosts from one data center to another. This requirement is limited to virtual hosts, where we can move the application between data centers using VMware vMotion.
Today, VMware 5.5 supports a maximum of 10ms latency between data centers, so be sure to check your hypervisor’s requirement. There is also a bandwidth requirement that depends on the size of the virtual machine (VM). The other factor is your data: the storage device that houses the business data must be synchronized between sites, and there are applications that can help achieve this requirement.
Here you are also tied to a latency requirement. As you can see, the active-active data center, where the client experiences no interruption, is limited by location (latency between data centers) and the amount of bandwidth. If you have less than 5ms of round-trip latency and at least one gigabit per second of bandwidth, you are an excellent candidate for this high level of business availability.
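These rules of thumb are easy to turn into a back-of-the-envelope check. The sketch below is illustrative only: the function names and the 25 percent protocol-overhead factor are my own assumptions, and the thresholds simply encode the 5ms round-trip / 1 Gbps guideline from the text:

```python
def migration_time_seconds(vm_size_gb, link_gbps, overhead=1.25):
    """Rough estimate of time to copy a VM's state across the inter-DC link.

    GB converted to gigabits (x8), padded by an assumed protocol overhead.
    """
    bits_to_move = vm_size_gb * 8 * overhead
    return bits_to_move / link_gbps

def is_hot_migration_candidate(rtt_ms, link_gbps,
                               max_rtt_ms=5.0, min_gbps=1.0):
    """Apply the 5 ms round-trip / 1 Gbps rule of thumb."""
    return rtt_ms <= max_rtt_ms and link_gbps >= min_gbps

print(migration_time_seconds(64, 1.0))       # 64 GB VM over 1 Gbps -> 640.0 seconds
print(is_hot_migration_candidate(4.2, 1.0))  # True: within both thresholds
```

A 64 GB VM over a 1 Gbps link works out to roughly ten minutes of copy time under these assumptions, which is why both the bandwidth and the size of the VM matter, not just latency.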
Enterprises that have regional diversity normally fall beyond the maximum latency requirement for no interruption to the clients, as you move the active application from one data center to another. Host mobility still adds business value with increased availability and you can load-balance your workload between data centers.
The resources at the DR data center are being utilized, and you know they will work if you encounter an epic event that could bring your business to a stop. Even if you don’t meet the latency requirement, this is still an excellent solution: use host mobility, where you shut down the server in one location and make it available in the other. Because the application keeps its identity, your clients will see very little interruption.
In my next post, I’ll focus on IP portability. I’ll take a deep dive into the technology, helping us understand a newer routing protocol called the Locator/ID Separation Protocol (LISP) and the deployment options we have with this protocol.
For more on data center solutions, check out CDW.com/Datacenter