Businesses of all sizes are embracing public Infrastructure as a Service (IaaS) solutions to improve efficiency, drive innovation and increase responsiveness to competitors and the market. In fact, analyst firm Gartner predicts adoption of public IaaS services will grow “38.6% in 2017 and reach $34.6 billion.” Despite the popularity and numerous benefits of public IaaS, organizations face distinct challenges when moving resources to the cloud.
As public cloud adoption rises, more and more data moves beyond traditional IT security protections, into data center environments no longer owned, managed or controlled by the enterprise IT team. At the same time, customer assets and applications in the cloud face the same types of threats that target corporate data centers. An unpatched Linux or Microsoft server connected to the internet is vulnerable to the same threats whether it sits on a customer network or in the public cloud. And malware introduced into the cloud can easily propagate among virtual machines (VMs), attack virtual segments or even ride unimpeded over VPN links back to corporate networks.
Shared Security Responsibility
To fully embrace the cloud, businesses need to understand where the balance of responsibilities lies: protecting the cloud infrastructure is incumbent upon the cloud provider, while protecting the data that resides in the cloud is incumbent upon the customer. This is what IaaS providers refer to as the shared responsibility model.
Public cloud networks are built upon a multitenant platform supporting millions of simultaneous customers worldwide. Foundational to public cloud environments are enhanced security, operational management and threat mitigation practices that protect the infrastructure, cloud fabric, hypervisors, services and tenant environments.
While public cloud providers deliver strong security controls to protect the cloud fabric, they have no knowledge of “normal” customer traffic. Instead, they offer basic port filters or access control lists (ACLs) to allow customers to segment their cloud environment as well as control both inbound and outbound traffic.
Cloud’s Unique Vulnerabilities
This presents a significant challenge to cloud architects and security administrators, who must provide the same level of protection for assets in the cloud that those assets enjoy on-premises. Confused by the shared responsibility concept, many organizations opt simply to rely on the built-in access control policies native to their cloud service provider (CSP).
While these services allow customers to open specific ports for communication with the internet (port 80 and 443, typically), there is no mechanism in place to block unwanted traffic or prevent threats, either known or unknown, from attacking customer assets. The bad guys are all too aware of this and have been diligently building automated tools to identify and attack vulnerable assets in the cloud.
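To make the limitation concrete, here is a minimal Python sketch of how a provider's port filter or ACL evaluates traffic. The rule structure, field names and first-match semantics are illustrative assumptions, not any provider's actual API; the point is that the filter sees only direction and port, so a malicious request over an open port evaluates identically to a legitimate one.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AclRule:
    direction: str   # "inbound" or "outbound"
    port: int
    action: str      # "allow" or "deny"

def evaluate(rules, direction, port, default="deny"):
    """Return the action of the first matching rule (first-match wins).

    Note what is NOT here: no inspection of payload, sender reputation
    or intent. The filter decides on direction and port alone.
    """
    for rule in rules:
        if rule.direction == direction and rule.port == port:
            return rule.action
    return default

# A typical web-tier ACL: expose only HTTP and HTTPS.
acl = [
    AclRule("inbound", 80, "allow"),
    AclRule("inbound", 443, "allow"),
]

print(evaluate(acl, "inbound", 443))   # allow -- legitimate HTTPS
print(evaluate(acl, "inbound", 3389))  # deny  -- RDP was never opened
# A vulnerability scan sent over port 443 also returns "allow":
# the ACL has no way to tell it apart from a real user.
```

This is exactly the gap described above: port filters segment and gate traffic, but they cannot block a threat that arrives over a port the application must keep open.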
In their recently published Bad Bot Report, Distil Networks noted unauthorized vulnerability scans were detected on 88 percent of all websites. Cybercriminals are conducting attacks using automated tools based on bot armies to target unsuspecting websites. They scan ranges of IP addresses of public cloud providers to find new servers and services coming online. The goal of these scans is not only to identify the connected host(s), but to see what else they are connected to in order to get a clear picture of the overall network environment and any higher value targets contained within. The scans are made to look like legitimate user activity and run over common ports (80 and 443, among others), so standard port-blocking techniques can’t prevent these scans.
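Since these scans ride over allowed ports, defenders must look at behavior rather than port numbers. The sketch below shows one simple behavioral heuristic, assumed for illustration only (the threshold and data shapes are not from the report): a single source sweeping many distinct destination IPs is a classic scan signature, even when every connection individually looks like normal web traffic.

```python
from collections import defaultdict

def flag_scanners(connection_log, fanout_threshold=50):
    """Flag sources that contact an unusually large number of distinct hosts.

    connection_log: iterable of (src_ip, dst_ip) pairs, all of which
    already passed the port filter (e.g. traffic on 80/443). The
    threshold is an illustrative assumption, not a recommended value.
    """
    targets = defaultdict(set)
    for src, dst in connection_log:
        targets[src].add(dst)
    return {src for src, dsts in targets.items()
            if len(dsts) >= fanout_threshold}

# A normal client hits one host repeatedly; a scanner sweeps a range.
log = [("198.51.100.7", "203.0.113.10")] * 20
log += [("192.0.2.99", f"203.0.113.{i}") for i in range(1, 61)]
print(flag_scanners(log))  # only the sweeping source is flagged
```

Real scan detection is far more sophisticated, but the design point stands: because the scans are indistinguishable from legitimate requests at the port level, detection has to key on aggregate patterns of behavior.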
Worse still, bad bot traffic accounted for about 20 percent of all website traffic in 2016, according to the same report. In addition to scanning and gathering intelligence on customer environments, these nefarious automated agents are put to a variety of other uses, including “web scraping, competitive data mining, personal and financial data harvesting, brute force login and man-in-the-middle attacks, digital ad fraud, spam, transaction fraud, and more.”
New services are particularly attractive targets because there is a time lag between when a service is deployed and when patches are installed, leaving a window of opportunity for motivated hackers. Additionally, since new web services are often deployed by non-security professionals (DevOps, cloud architects, etc.), best practices like strong passwords are often neglected, leaving these sites ripe for stolen and misused credentials, along with a host of other attacks.
Making Security as Flexible and Dynamic as the Cloud
A defense-in-depth strategy for the cloud must include the ability to protect the network along with all workloads and data from exploits, malware and other sophisticated attacks. At the same time, the way we look at and deploy IT security has to change.
Security has to evolve from a static, manually intensive discipline into something flexible and dynamic that seamlessly supports the new functionality of cloud-based environments. Automation and orchestration are now critical for aligning security with the ever-changing needs of businesses, and for embedding security into the overall process of defining, deploying and expanding cloud services.
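One common way to embed security into deployment automation is a policy gate: an automated check that runs before a service definition is deployed and rejects configurations violating security policy. The sketch below assumes a hypothetical config format, approved-port list and patching flag purely for illustration; the idea, not the specifics, is the point.

```python
# Illustrative policy-as-code gate for a deployment pipeline.
# APPROVED_INBOUND_PORTS and the config keys are assumptions.
APPROVED_INBOUND_PORTS = {80, 443}

def policy_gate(service_config):
    """Return a list of policy violations; empty means deploy may proceed."""
    violations = []
    for port in service_config.get("inbound_ports", []):
        if port not in APPROVED_INBOUND_PORTS:
            violations.append(f"inbound port {port} is not approved")
    if not service_config.get("patching_enabled", False):
        violations.append("automatic patching must be enabled")
    return violations

web = {"inbound_ports": [80, 443], "patching_enabled": True}
risky = {"inbound_ports": [443, 3389], "patching_enabled": False}

print(policy_gate(web))    # [] -- passes the gate
print(policy_gate(risky))  # lists both violations, deploy is blocked
```

Because the check runs automatically on every deployment, security keeps pace with the cloud's rate of change instead of relying on manual review after the fact.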
IT security can no longer remain monolithic and rigid; it has to be modified and better aligned with the rapid transformation businesses are undertaking as they transition to cloud-based networks. At the same time, organizations should look for solutions that provide comprehensive threat prevention, access control, identity, strong authentication, compliance reporting and multi-cloud connectivity to help them embrace the cloud with confidence.