Data centers have changed quite a bit since pizza boxes reigned supreme. We replaced those servers with virtualization, shrinking our physical footprint and improving CPU and memory utilization. Next came the idea of fully tested systems in the form of converged infrastructure: making sure the compute, storage and networking components worked well together. This all leads us to today’s leading technologies: the software-defined data center and hyperconverged infrastructure (HCI).
With HCI, compute, storage and virtual networking become software defined: easily changed and upgraded. More importantly, HCI lets us scale out resources easily, adding capacity when it is needed rather than buying excess headroom up front. Deploying and managing these data center systems is easier than it has ever been, but this is only a foundation: the base components for what today’s smart and connected businesses are asking of their data centers.
The public cloud has shown what is possible in speeding up the delivery of development, processing and analytics. To bring that functionality to a private cloud, we use these advances as a foundation on which to build more advanced capabilities.
Standardized components such as HCI are an enabler for automation. We want to be able to go to a self-service portal and request resources. That request invokes a cloud management platform that provisions virtual machines (VMs), configures their software and delivers them to us in minutes, not weeks.
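The self-service flow described above can be sketched in a few lines. This is a minimal illustration, not a real product API: the class and field names (`CloudManagementPlatform`, `VmRequest`, and so on) are invented for the example.

```python
# Hypothetical sketch: a portal request triggers a cloud management
# platform (CMP) that provisions a VM, configures its software and
# returns it to the requester. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class VmRequest:
    name: str
    cpus: int
    memory_gb: int
    software: list = field(default_factory=list)

@dataclass
class Vm:
    name: str
    cpus: int
    memory_gb: int
    installed: list
    state: str = "running"

class CloudManagementPlatform:
    def provision(self, req: VmRequest) -> Vm:
        # 1. Carve the VM out of the software-defined resource pool.
        vm = Vm(req.name, req.cpus, req.memory_gb, installed=[])
        # 2. Configure the requested software automatically.
        for package in req.software:
            vm.installed.append(package)
        # 3. Deliver the ready-to-use VM back to the requester.
        return vm

cmp = CloudManagementPlatform()
vm = cmp.provision(VmRequest("dev-web-01", cpus=4, memory_gb=16,
                             software=["nginx", "postgresql"]))
print(vm.state, vm.installed)  # running ['nginx', 'postgresql']
```

The point is that the request, not a human operator, drives provisioning and configuration end to end, which is what turns weeks into minutes.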
Supporting a Multicloud Environment
We can use traditional data centers as another spoke in the wheel of a multicloud strategy. Placement should become seamless, with cost modeling and business logic defining when a workload can be placed without additional approvals and when a request needs review. The entire ecosystem should allow chargeback based on a user’s business unit, or at least show back the cost of what that user is consuming.
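The placement and showback logic above can be sketched as a simple rule set. This is an illustration under invented assumptions: the rates, the approval threshold and the function names are all hypothetical, not figures from any real cloud.

```python
# Hypothetical sketch: business rules decide whether a workload can be
# placed without review, and costs are tagged to the requester's
# business unit for chargeback/showback. Rates and thresholds invented.

APPROVAL_THRESHOLD = 500.0  # monthly cost ($) above which review is needed
HOURLY_RATE = {"private": 0.08, "public": 0.12}  # $ per vCPU-hour (made up)

def monthly_cost(target: str, vcpus: int, hours: int = 730) -> float:
    """Estimated monthly cost of running `vcpus` on the given target."""
    return HOURLY_RATE[target] * vcpus * hours

def place_workload(vcpus: int, business_unit: str) -> dict:
    # Cost modeling: pick the cheaper placement target.
    target = min(HOURLY_RATE, key=lambda t: monthly_cost(t, vcpus))
    cost = monthly_cost(target, vcpus)
    # Business logic: small requests go through; big ones need review.
    needs_review = cost > APPROVAL_THRESHOLD
    # Showback record: attribute the cost to the business unit.
    return {"target": target, "monthly_cost": round(cost, 2),
            "business_unit": business_unit, "needs_review": needs_review}

print(place_workload(4, "marketing"))
# {'target': 'private', 'monthly_cost': 233.6,
#  'business_unit': 'marketing', 'needs_review': False}
```

A real implementation would pull rates from provider pricing APIs and tags from an identity system, but the shape is the same: model the cost, apply the policy, record who consumed what.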