Cloud Computing: Achieving Control in the Hybrid Cloud

Learn how to control applications and services dynamically deployed from a variety of cloud environments to devices such as user desktops, Microsoft Terminal Services sessions and a range of mobile devices.

Dan Griffin and Tom Jones

If you believe even a fraction of the current press about cloud computing, you want to understand not only how to harness its benefits for your own organization, but how to do that without losing control of information that is critical to the organization's success.

Most early cloud implementations will be hybrids of services that run on-premise and services deployed in the cloud, so that's the scenario we'll explore here to help you begin a gradual move into the cloud without undue fear of losing control. You'll learn how to control applications and services dynamically deployed from a variety of cloud environments to devices such as user desktops, Microsoft Terminal Services sessions and a range of mobile devices.

The goal is to evolve from control of computers to control of services available to users. This takes existing Service-Oriented Architecture (SOA) and machine virtualization to the next step. The result will be increased business productivity with less overhead, as the user will be able to work anywhere on any capable device without any worry about application deployment.

Two Ways to Look at Cloud Services

Let’s just accept as given that your organization will soon have some cloud exposure, so it’s important to plan the transition of some applications into the cloud. There are at least two distinct axes on which a cloud service can be measured.

Organization Size | Authentication | Collaboration | Typical Cloud Use
Small organization, up to 25 users | Workgroup | Individual | Applications like email
Medium organization, up to 250 users | Domain | Federation | Platform, database, ERP
Enterprise-class organization | Multiple domains | Dedicated | Load and location leveling

Axis 1: Delivering services to three broad sizes of customer organization

The above are generalizations of the ways that organizations of different sizes approach the web. Small organizations and small departments within large organizations typically pick a complete software solution like email or office productivity applications. These solutions are isolated from the organization’s other IT resources and offer fixed management control experiences that are a part of the total solution.

Once an organization gets to the size where it can fund a full-time IT administrator, it will look at more personalized cloud solutions that offer more control over the application. New projects can start completely in the cloud when they have no important dependencies on existing business data. Projects that involve extensions to existing on-premise applications have a greater dependency on on-premise servers and hence are more difficult to move into the cloud. Larger enterprises will take on even more flexibility by renting virtual images on machines managed by external or internal cloud providers.

In summary, while the small organization has little need or capability to manage its cloud deployments, each step beyond that requires management tools that control both on-premise and cloud resources, something that has been very difficult to accomplish until now.

Offering Type | Description | Industry | Microsoft
SaaS: Application Software as a Service | Fixed apps like email, ERP, CRM | Salesforce | Office 365, CRM Online, Windows Intune
PaaS: Platform as a Service | Controlled space for apps or database | Google, Facebook | Windows Azure, SQL Azure, Azure AppFabric
IaaS: Infrastructure as a Service | Complete access to dynamic VMs | Amazon AWS, private clouds | Windows Server Hyper-V

Axis 2: Delivering services at three broad depths of engagement (service models)

This axis focuses on vendor product solutions rather than on customer needs, as the first axis does. Vendors must commoditize the service offering to keep costs low, so it is incumbent on the customer to ensure that the service level agreement (SLA) from the vendor will meet their performance expectations. At each step in the service offering, from software to platform to infrastructure, the customer assumes more of the management load in exchange for more flexibility in the service offering.

What’s Your Motivation?

It is interesting to look at which companies are moving first to the cloud. The answer can be summed up in these two categories:

  • Who has the most to gain? Any organization that is planning a switch away from an old or expensive platform will be strongly motivated by the reduced capital expenditure of the cloud. Any organization with rapid growth or highly variable compute loads will find the cloud's ability to scale both up and down to be a real cost advantage.
  • Who has the least to lose? Generally, the larger, more diverse enterprises have the most to lose and are the least likely to move, absent some major shift in their business model (like a divestiture). The smaller the company, the smaller the existing investment and the less concern about moving to the cloud. According to research by McKinsey, SMBs with fewer than 250 employees are more than twice as likely as big enterprises to adopt cloud services. The evidence from BPOS deployments bears this out.

Management Control as a Service

A big problem created by hybrid clouds is that there is a duplicated set of IT controls: one on-premise in the enterprise Active Directory, and one in the cloud under a different namespace. That separation creates a synchronization problem between these two broad deployment areas, and all of these services need to be controlled by the customer.
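To make the synchronization problem concrete, here is a minimal Python sketch of one-way reconciliation between an authoritative on-premise directory and a cloud namespace. The names (onprem_users, cloud_users, provision, deprovision) are hypothetical illustrations, not any particular product's API.

def sync_directories(onprem_users, cloud_users, provision, deprovision):
    # onprem_users and cloud_users map a user principal name to attributes.
    # Accounts that exist on-premise but not in the cloud get provisioned.
    for upn, attrs in onprem_users.items():
        if upn not in cloud_users:
            provision(upn, attrs)
    # Accounts left only in the cloud are stale; removing them ensures that
    # revoking a user on-premise also revokes cloud access.
    for upn in cloud_users:
        if upn not in onprem_users:
            deprovision(upn)

if __name__ == "__main__":
    onprem = {"alice@contoso.com": {"dept": "finance"}}
    cloud = {"bob@contoso.com": {"dept": "sales"}}  # Bob has left the organization
    sync_directories(onprem, cloud,
                     provision=lambda u, a: print("provision", u, a),
                     deprovision=lambda u: print("deprovision", u))

Real products do this bidirectionally and with conflict handling, but the core issue is the same: two directories, one authoritative source, and a policy for reconciling them.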

That control function is sometimes described as a separate service that could also be run on-premise or in the cloud, but really it is just another application. Whether the control application is supplied as SaaS, PaaS, IaaS or on-premise is not as important as determining a control architecture and management software solution that will work for the entire organization in a hybrid environment. After a solution is chosen, the location where that solution runs will be easier to determine.

One case where management has been deployed as a service is Windows Intune. For a low monthly fee per PC, it adds cloud management of the PC together with an upgrade subscription. Using the same anti-malware client code as Microsoft Security Essentials (MSE) and Forefront Endpoint Protection (FEP), it provides small to medium organizations with robust anti-malware protection for PCs in any location with Internet access.

For hybrid clouds, it’s important to note that management is a service that needs to be hosted somewhere with collectors in all of the locations where the cloud infrastructure is deployed.

Until recently, management of the cloud meant management of the virtual machines that are the core infrastructure for the cloud. An example is System Center Virtual Machine Manager (SCVMM), which provides centralized management of physical and virtual on-premise servers. But as more organizations deploy hybrid environments, the inability of existing tools to manage two disparate and often incompatible domains has become clear. Figure 1 shows how the release of the System Center 2012 suite will address cloud management issues before the end of 2011.

Figure 1 Managing multiple cloud sites

While SCVMM will be upgraded and will continue to support individual cloud deployments, a new System Center product called App Controller (code-named Concero) will offer control of multiple cloud deployments. App Controller will allow a single-pane-of-glass view into the private cloud (System Center 2012 and Hyper-V) and Windows Azure (the public cloud). For example, Citrix has already announced that Xen virtual machines will be included in App Controller soon after RTM.

This continues a process in which the cloud provider shields the IT department from any concern about deploying and maintaining server and router hardware. The cloud infrastructure as a service (IaaS) can be assumed to exist, and IT staff should focus on providing applications and services to the organization.

User-Oriented Architecture

While the next generation of management tools is responding to the hybrid cloud trend, another important trend to address is bring-your-own-device (BYOD): users accessing corporate resources with personal hardware.

Again, the major management solutions are responding. For example, user-centric application delivery is enabled with System Center Configuration Manager (SCCM) 2012, which focuses on providing mobility of a specific resource for a specific user. The IT focus is moving from work-life balance to work-life integration. Users expect to be able to work anywhere, anytime, so solutions and their architectures need to be designed with that as a primary goal.

The SLA guarantees on uptime needed to support this must be at least 99.9 percent, with better guarantees possible as the technology progresses. For on-premise computing, the focus has been on insulating the service offerings from cloud outages, but as users increasingly move off-premise, there are fewer services that must operate when the Internet is unavailable.
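To put the 99.9 percent figure in perspective, the short calculation below converts an SLA availability percentage into the downtime it actually permits.

HOURS_PER_YEAR = 24 * 365            # 8,760 hours
HOURS_PER_MONTH = HOURS_PER_YEAR / 12

for sla in (99.9, 99.95, 99.99):
    down = 1 - sla / 100             # fraction of time the SLA allows the service to be down
    print("%.2f%% uptime allows %.2f hours/year (%.1f minutes/month) of downtime"
          % (sla, down * HOURS_PER_YEAR, down * HOURS_PER_MONTH * 60))

At 99.9 percent, that is roughly 8.8 hours of outage per year, or about 44 minutes per month.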

It has been possible since Windows 2000 to offer fine-grained policy and application deployment, specified down to the level of the individual user, but that capability has not been widely used because of licensing and device limitations. Device affinity is now available in SCCM 2012. If users are not on one of their "primary devices," the strategy is to provide them some other type of access, such as Remote Desktop Services or Office 365.

Since not all devices have the same storage or other resources, application deployment options need to take device capabilities into account. If the IT infrastructure is to provide user access to any document, anywhere, it needs to adapt to the device. For example, Windows Phone 7 comes with a version of Microsoft Office, so if the user is accessing a Word document from a smartphone, in some cases the document can be downloaded to the phone, but on other devices the document will need to be rendered on a web or terminal server to allow remote access.
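A possible decision rule for that kind of adaptation is sketched below; the capability flags and delivery options are purely illustrative assumptions, not part of any management product.

def choose_delivery(device):
    # Pick a delivery method for a document based on device capability.
    if device.get("has_local_office_app") and device.get("free_storage_mb", 0) >= 50:
        return "download"        # e.g., a phone with a local Office viewer and room to store the file
    if device.get("supports_remote_desktop"):
        return "remote_session"  # render the document on a terminal server
    return "web_render"          # fall back to a browser-based rendering

print(choose_delivery({"has_local_office_app": True, "free_storage_mb": 512}))   # download
print(choose_delivery({"supports_remote_desktop": True}))                        # remote_session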

This is a different way of thinking about application deployment in which the industry is moving from managing the workstation to adapting to users and their devices. The implication is that commercially successful applications will need to be available for all common deployment mechanisms in order for management software to succeed in providing anywhere access.

Authentication of the Cloud Servers

Security breaches have taught us to ensure that both ends of any connection are properly identified. Microsoft Terminal Services added TLS protection (via Schannel) to ensure that users are not spoofed by rogue servers masquerading as official sites or subjected to man-in-the-middle attacks. These types of attacks are prevented by authenticating both ends of a connection.

Virtual private networks (VPNs) have long been used as a gateway for securely accessing organizational assets. With DirectAccess (DA) available in Windows 7, there is now a way for local and remote computers to use IPsec to verify that each computer is part of the same security domain, as well as to provide "always on" management capability.

If users are not part of the same security domain, the best available solution is TLS with certificates issued to all machines (or users) that need to trust each other. Today, web servers can acquire extended validation (EV) certificates to use in TLS connections. Users can additionally be authenticated with smart cards as part of TLS mutual authentication. However, the difficulty of deploying certificates and private keys to end users has caused most user authentication to fall back to user names and passwords.
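As a minimal sketch of TLS with authentication of both ends, the following Python example uses the standard-library ssl module to verify the server's certificate chain and host name while also presenting a client certificate; the host name and certificate file paths are placeholders for your own PKI.

import socket
import ssl

# Client side of a mutually authenticated TLS connection. The CA bundle,
# client certificate and private key are placeholder files.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="corp-ca.pem")
context.load_cert_chain(certfile="client-cert.pem", keyfile="client-key.pem")

with socket.create_connection(("intranet.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="intranet.example.com") as tls:
        # Chain and host name validation happen during the handshake; a name
        # mismatch or an untrusted chain raises an ssl exception.
        print("Negotiated", tls.version(), "with", tls.getpeercert()["subject"])

The server side enforces the other half of the handshake by requiring a client certificate, which is how both a rogue server and a rogue client are excluded.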

Control of Data in the Cloud

There are two aspects to the control of data in the cloud. The first is whether the data should be placed in the cloud at all. If the data is moved to the cloud, the second aspect is how to prevent it from leaving the cloud. The physical location of the data may be important when corporate or governmental compliance is a factor, and the management control programs will need to be able to track compliance with policy restrictions. Governance, risk and compliance (GRC) papers and solutions have been collected by Microsoft at their team blog site.

As we observed above, it is critical that IT take a proactive approach to setting policies for corporate data and cloud computing; otherwise, some other business group at your organization may effectively make the decision for you. As one example of data creep, the need for data access authorization in the cloud carries with it the idea that some authorization claim will need to be presented to those cloud servers. Next we will consider the authentication of users and then the authorization of their access to cloud resources.

Single Sign On

The advantage of having a single point of control over users’ access to any organizational assets should be clear. Central control of user authentication credentials allows immediate and full revocation of access to all organizational assets when that user is no longer part of the organization. Central control also provides consistent policy for attributes like password strength.

Extending centralized authentication policy to cloud resources provides not only continuity of control but also maximizes user convenience so that the user need not provide different credentials based on the location of the resource. This will be particularly important as resources migrate to the cloud over time.

The history of Web SSO started in the mid-1990s and has not yet resulted in a widely acceptable solution. The current successes use a federated approach with SAML, which assumes that a user is already well known in one namespace and needs to establish authorized access in another. A federated ID does not provide a broad web-based solution that would accommodate all of the services currently under development for the cloud.

OAuth 2.0 is a new protocol that extends the architecture of OAuth 1.0 to allow for the selective disclosure needed in a world where user privacy is mandated by so many government regulations. Even though it has not reached final approval, parts of it are already supported by Microsoft and other identity solution providers. Until identity providers are more broadly deployed, the only practical solution is federated identity. Stay tuned for more on Web SSO in the near future.
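For a sense of what an OAuth 2.0 interaction looks like on the wire, the sketch below performs the authorization-code token exchange defined by the draft specification; the token endpoint, client credentials and authorization code are hypothetical placeholders.

import json
import urllib.parse
import urllib.request

# OAuth 2.0 authorization-code grant: exchange the code obtained after user
# consent for an access token. All endpoint and credential values are placeholders.
token_endpoint = "https://login.example.com/oauth2/token"
form = urllib.parse.urlencode({
    "grant_type": "authorization_code",
    "code": "AUTH_CODE_FROM_REDIRECT",
    "redirect_uri": "https://app.example.com/callback",
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",
}).encode()

request = urllib.request.Request(token_endpoint, data=form, method="POST")
with urllib.request.urlopen(request) as response:
    token = json.load(response)

# The access token is then presented to the protected resource as a bearer token.
print(token.get("access_token"), token.get("expires_in"))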

Cloud-Accessible Authentication Directory

As applications are placed in the cloud, the databases that support those applications imply that access to the data must also be available from the cloud. There will be little incentive to keep a database on-premise if any significant application for that database is in the cloud.

Authentication data is a good example. The authoritative source of authentication information is typically some HR or customer database that is synchronized with one or more IT directory servers. While it is customary for the directory servers to be on-premise, it is a good idea to examine the need for authentication to cloud resources. For example, if email servers are to be provisioned in the cloud, then the implication is that the address list will also be in the cloud.

Once that decision is made, there is no longer much of a privacy implication to putting the entire organizational directory server in the cloud as well. The cautionary result of any decision to move applications to the cloud is that database requirements will migrate to the cloud, too. Once access to the database is enabled from the cloud, there is little reason not to move the database itself to the cloud and allow on-premise application access. As an optimization, read-only copies of any directory service will support faster response and resilience against a loss of network connectivity.

Cloud-Accessible Authorization

If cloud-accessible authentication information is not politically acceptable, then some other method for authorizing access in the cloud needs to be enabled. When authorization claims are demanded by relying servers in the cloud, the local policy can make just-in-time decisions about which data is permitted on a case-by-case basis. The SAML and OAuth 2.0 standards have some of that capability today, but a general solution awaits the completion of a protocol that will enable a business model for identity providers.
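The following sketch illustrates how a relying server might make such a just-in-time decision from role claims; the claim names and policy table are hypothetical, and in practice the claims would arrive in a signed SAML assertion or OAuth token rather than a plain dictionary.

POLICY = {
    # resource -> roles permitted to access it
    "payroll-report": {"finance", "hr"},
    "build-server": {"engineering"},
}

def is_authorized(claims, resource):
    # claims maps a claim type to its values, e.g. {"role": ["finance"]};
    # in a real deployment the claims come from a verified token or assertion.
    allowed_roles = POLICY.get(resource, set())
    return any(role in allowed_roles for role in claims.get("role", []))

print(is_authorized({"role": ["finance"]}, "payroll-report"))      # True
print(is_authorized({"role": ["engineering"]}, "payroll-report"))  # False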

Until then, a federated approach with a data-release policy applied to each federated organization is the only solution. The key to broader availability of selective disclosure is choosing an identity provider, so that is the decision that needs the most attention as the organization moves its resources into the cloud. The reality is that SAML has only been adopted by 10 to 20 percent of the market, according to Forrester, Gartner and others. OpenID and HTTP federation are also very useful in federating the widest range of existing user stores and SaaS applications.

Summary: Three Steps to Prepare

Cloud services will be part of the future of every IT department sooner or later, so it is critical for the IT professional to get ahead of the coming onslaught. Three areas will need the most attention in advance of any move into the cloud.

  1. Management of the hybrid cloud is an evolving area that will need the most attention because it is currently the least understood. Try to spend some time exploring management tools like System Center App Controller.
  2. User authentication and directory services will be needed anywhere that employees or other users expect to access organization resources. Since some governments have strict rules about the locations of private data, try to determine where the authoritative data will reside and how it will be migrated to the other clouds that need it.
  3. Access control and data leak prevention will become more difficult as data migrates to the cloud. As a goal, a role-based access control mechanism based on SAML claims seems to be the best bet today for providing the required functionality.

Proactivity is the key for retaining control.


Dan Griffin is a software security consultant based in Seattle. He can be reached at www.jwsecure.com.

 


Tom Jones is a software architect and author specializing in security, reliability and usability for networked solutions for financial and other critical cloud-based enterprises. His innovations in security span a full range from mandatory integrity to encrypting modems. He can be reached at tom@jwsecure.com.