Cloud Security: Safely Sharing IT Solutions

You can split IT workloads between the fixed cost of local resources and the variable cost of cloud resources without losing control of access to enterprise assets. Let us show you how.

Dan Griffin and Tom Jones

When our children were young, we kept them safe at home. When they had learned to fend for themselves, we let them venture forth. It’s a similar situation with enterprise assets. We’ve traditionally protected them within the perimeter of the network. We put up firewalls to ensure those assets didn’t leave the premises.

When you tell IT managers they can share their local computing load with on-demand cloud-based resources, the first reaction is excitement at the possible cost savings and user-experience improvements. But like overprotective parents, they often find that excitement turning to skepticism and anxiety about the new challenges of securing enterprise assets across multiple control points.

Data processing has already moved from the enterprise datacenter to PCs spread across the world. The next logical step is to move the enterprise data and applications from within the enterprise firewall to where they’ll be closer to the business users who need them. That means moving to the cloud.

To reap the benefits of cloud computing without the accompanying anxiety, you need to establish distributed access control to match the distributed content and distributed applications. Here, we’ll outline the steps you need to take to ensure reliability and control as your data and applications move beyond the enterprise perimeter.

Secure Your Cloud Architecture: Step-by-Step

  1. Establish service-oriented architecture (SOA) to ensure that you can safely relocate each component
  2. Centralize management of data and application deployment and updates
  3. Use federated identity management to ensure every user is known at every point in the cloud
  4. Assign roles and other attributes to each user to verify data-access claims
  5. Assign access-control rules to applications and data that can move with them to the cloud
  6. Authorize access to applications and data based on verified user-access claims
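
To make steps 4 through 6 concrete, here's a minimal sketch in Python of how user attributes, a portable access-control rule and an authorization decision fit together. The user names, roles and rule format are hypothetical illustrations, not the schema of any particular product.

    # Minimal sketch of steps 4-6: attributes, portable rules, authorization.
    # All names (users, roles, rule format) are hypothetical.

    # Step 4: roles and other attributes assigned to each user.
    users = {
        "alice": {"role": "finance-analyst", "department": "finance"},
        "bob":   {"role": "barista",         "department": "retail"},
    }

    # Step 5: an access-control rule attached to the data itself, so the
    # rule can travel with the data when it moves to the cloud.
    document = {
        "name": "q3-forecast.xlsx",
        "rule": {"required_role": "finance-analyst"},
    }

    # Step 6: authorize access based on verified user-access claims.
    def authorize(user_id: str, resource: dict) -> bool:
        claims = users.get(user_id, {})
        return claims.get("role") == resource["rule"]["required_role"]

    print(authorize("alice", document))  # True
    print(authorize("bob", document))    # False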

Service-Oriented Architecture

The first step in establishing an anxiety-free cloud deployment is to create a diagram showing the application and data flow. For the design to be service-oriented, each application needs to operate as a service that users can access either locally or in the cloud. Similarly, the location of the data should not be hardcoded in the application; you need to be able to configure that location when you deploy the application. You can see in Figure 1 how the components of the IT environment relate to the applications and data sourced on either local or cloud resources.

Figure 1 A look at the architecture of application and data flows.

Your development team sources the application executables. You can have them deployed directly from the vendor, but you can exert more control if you first bring all application code and updates into the enterprise and distribute them from there. Data migrates from the client machine to either the enterprise or cloud data stores, which are shown as SharePoint servers in Figure 1.

When applications access data, that action is authorized by access-control mechanisms local to each data store. The integrity of the application executables and the enterprise data needs extra consideration as they move beyond the perimeter and into the cloud. The ideal and most flexible management situation is when you can manage local and cloud resources as a single entity that can dynamically respond to resource requests.
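
One way to honor the SOA principle that data location is configured at deployment rather than coded into the application is simple indirection through a configuration file. A minimal sketch, with a hypothetical file name and keys:

    # Minimal sketch: the application never hardcodes where its data lives.
    # The data-store location comes from deployment configuration, so the
    # same code runs against a local server or a cloud server unchanged.
    # The file name and keys here are hypothetical.
    import json

    def load_data_store_url(config_path: str = "deploy.json") -> str:
        with open(config_path) as f:
            config = json.load(f)
        # e.g. {"data_store": "https://sharepoint.example.com/sites/sales"}
        return config["data_store"]

    # The rest of the application talks only to the returned URL; moving
    # to the cloud means changing the configuration file, not the code.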

Accounting for the Cloud

The first step in justifying any cloud deployment is determining the return on investment. Costs are typically classified as setup or conversion costs, which include commissioning the new services, training, and decommissioning the old services. Return is expressed as reduced cost-per-month and the number of months needed to recoup the investment. More sophisticated analysis includes discounted cash flow, but if the payback period is under two years, that extra rigor likely adds no real value to the decision process.
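
As a worked example of that payback arithmetic (the figures are invented for illustration):

    # Hypothetical payback-period calculation for a cloud migration.
    setup_cost = 120_000.0       # commissioning, training, decommissioning
    monthly_savings = 8_000.0    # reduced cost-per-month after conversion

    months_to_recoup = setup_cost / monthly_savings
    print(f"Payback period: {months_to_recoup:.0f} months")  # 15 months

    # Under two years, so per the rule of thumb above, a discounted
    # cash flow analysis likely adds no real value to the decision.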

The real value of cloud deployments comes from intangible benefits such as improved responsiveness to fluctuations in demand for services and improved cost control. Consider these types of costs from the perspective of the IT department:

  1. Fixed costs typically come from investments in capital equipment like servers and machine rooms. The costs are typically depreciated over the lifetime of the asset. That depreciation will be charged to the income statement every month regardless of the equipment use.
  2. Variable costs depend on the amount of service provided and will include cost of goods sold and any fee charged based on cloud usage, such as short-term equipment rental based on the current load. This type of cost gives the IT department the best ability to tie costs to service delivery.
  3. Semi-variable costs typically come from services provided to full-time employees or other resources that are more difficult to scale up or scale down. Software rental or provisioning e-mail services will be in this category. The inertia behind provisioning and de-provisioning employees causes this cost to significantly lag behind changes in demand for services.

You can justify using cloud services for semi-variable costs in cases such as off-loading payroll services to a dedicated provider. In payroll, as in e-mail, the rules change rapidly: the software needs constant updates, and the expertise to perform these functions is expensive. While it's more difficult to justify cloud provisioning based on semi-variable costs, the results can still be positive and can help IT focus on its real mission of delivering value to the enterprise.

Four Steps to a Secure Cloud Deployment

Most IT executives think cloud computing is a way to reduce capital expenditures by using virtualization technology. Many vendors tack the word “cloud” onto any Internet service. For our purposes here, we’re using the Gartner Inc. description of how the cloud came to be so important to business: “the commoditization and standardization of technologies, [owing] in part to virtualization and the rise of service-oriented software architectures, and most importantly, to the dramatic growth in popularity of the Internet.” This is important in four specific areas:

  1. Centralized data management, using SharePoint as an example
  2. Centralized application management, using Exchange as an example
  3. Federated identity management, using Active Directory Federation Services (ADFS) as an example
  4. Additional assistance for migrating to the cloud

Centralized Data Management

In 2007, Gartner began telling security conferences it was time to abandon the hardened perimeter boundary between the enterprise and the Internet. Even at that time, experts were arguing that enterprise boundaries were already porous. Perimeters had become irrelevant to the task of keeping out intruders, so access control was required with every IT service. Security de-perimeterization is the current reality. To be truly secure, only the server that contains data can ultimately control access.

Still, it isn’t practical to manage access at every server, because many deployments contain hundreds or even thousands of servers. Nor can IT realistically determine data rights and access rules on its own. IT can, however, establish a role-management system with which business owners can permit or deny access in line with business objectives.
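
A minimal sketch of that division of labor, with hypothetical roles and resources: IT supplies the role-management mechanism, and business owners decide who holds which role.

    # IT provides the mechanism; business owners populate it.
    # Role and user names are hypothetical illustrations.

    role_grants = {}  # maintained by business owners, not by IT

    def grant_role(owner: str, user: str, role: str) -> None:
        """A business owner grants a role relevant to business objectives."""
        role_grants.setdefault(user, set()).add((owner, role))

    def has_role(user: str, role: str) -> bool:
        return any(r == role for _, r in role_grants.get(user, set()))

    grant_role(owner="sales-vp", user="alice", role="pricing-reader")
    print(has_role("alice", "pricing-reader"))  # True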

The regulatory environment has become increasingly stringent both for data modification and data access. This requires a new paradigm: one that will allow data to migrate to whichever server is best able to service access requests, while ensuring compliance at reasonable cost. Here are some requirements to consider for data management in a cloud environment:

  • Fast access to authorized data, when and where it’s required
  • Access that isn’t compromised by a natural or business catastrophe
  • Support for data discovery in response to lawful government requests, assuming the enterprise can provide the data needed
  • Data Loss Prevention (DLP) as an integral part of the service offering
  • A service-oriented architecture (SOA) that enables easy data migration back and forth to the cloud
  • Data identity that doesn’t include physical location, so the data can easily be moved
  • Location tags that reflect the data’s logical country of origin, not its physical location
  • Backup and recovery operations based on the data’s identity, not its location
  • Data-access rules that the business owner of the data can create and maintain
  • Access permissions that compliance auditors can view
  • Audit controls on sensitive data for both modification and access
  • Separation of duties, so the same administrator can’t modify both data and audit logs
  • Service Level Agreements (SLAs) that spell out everyone’s expectations and responsibilities
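
Several of these requirements hinge on a single idea: identify data by a logical name and look up its physical location, rather than baking the location into the identity. A minimal sketch of that indirection, with hypothetical identifiers and locations:

    # Minimal sketch: data identity is logical; location is looked up.
    # Identifiers, URLs and tags are hypothetical.

    catalog = {
        # logical identity -> current physical location plus origin tag
        "sales/pricing-2010": {
            "location": "https://cloud.example.com/store/7f3a",
            "origin_country": "US",  # logical country of origin, not the
        },                           # hosting datacenter's country
    }

    def resolve(data_id: str) -> str:
        """Backup, recovery and access all go through the identity."""
        return catalog[data_id]["location"]

    def migrate(data_id: str, new_location: str) -> None:
        """Moving data to (or from) the cloud only updates the catalog."""
        catalog[data_id]["location"] = new_location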

Starbucks Corp. found that physical (paper-based) distribution of current pricing, business analysis and news was slow and not cost-effective. As a result, it now supports SharePoint for its network of 16,000 locations. That SharePoint site has become a business-critical communications channel through which employees can get current information, with the ability to search quickly for the information they need, when they need it.

Availability and reliability are tracked with Microsoft System Center Operations Manager (SCOM) and other analytic tools. Because SharePoint supports both internal and external network connections, the server locations can adapt to suit the current network topology without concern for local, cloud or mixed environments. This deployment has enabled Starbucks to realize the following benefits:

  • Supporting store growth and capacity needs by improving system stability with effective monitoring and reporting tools
  • Allowing store partners to work more efficiently and effectively with an intuitive portal interface and easy access to information across the enterprise
  • Maintaining data security with enhanced document management and privacy functionality
  • Aligning store priorities with company objectives by integrating trends and growth reports with partner communications

Integrity Protection

Any data store must be prevented from becoming an infection vector for viruses or spyware. File types such as executables and compressed or encrypted files can be blocked to address a variety of integrity and compliance concerns. Microsoft employee David Tesar blogged about some of the business reasons to protect SharePoint using Forefront Protection 2010 for SharePoint, which was released in May 2010.

Data Loss Protection and Detection

To ensure full protection, data from one customer must be properly segregated from that of another. It must be stored securely when “at rest” and able to move securely from one location to another (security “in motion”). IT managers must ensure that cloud providers have systems in place to prevent data leaks or access by third parties. This should be part of an SLA. Proper separation of duties should ensure that unauthorized users can’t defeat auditing and/or monitoring—even “privileged” users at the cloud provider. Figure 2 shows the various data transitions susceptible to outside attack.

Figure 2 The relationships of data and trust transitions

The new attack points against enterprise desktops and servers using the Internet or physical media include:

  1. Data transfers from the enterprise to the cloud that lose authorization information in transit
  2. Access to cloud-hosted SharePoint services that lack enterprise protection
  3. Leakage of private data, or of authorization information from external identity providers
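
Against the first of those attack points, data can be protected both in motion and at rest by encrypting it before it ever leaves the enterprise. Here's a minimal sketch using the open source Python cryptography package; key management (who holds the key, and where) is the real design decision and is out of scope here:

    # Minimal sketch: encrypt data before it leaves the enterprise, so
    # it is protected both in motion and at rest in the cloud store.
    # Key management is the hard part and is deliberately omitted.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice: held by the enterprise
    f = Fernet(key)

    plaintext = b"Q3 forecast: confidential"
    ciphertext = f.encrypt(plaintext)   # safe to transfer and store

    # Only holders of the enterprise key can recover the document.
    assert f.decrypt(ciphertext) == plaintext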

As the amount of data increases, the time to filter this data or the cost to increase storage capacity can become significant. The keyword and file-filtering available with Forefront Protection 2010 for SharePoint lets you control the type of data you allow on the SharePoint server and provides reporting on what types of files are present. This functionality can reduce costs by avoiding additional storage capacity and by helping to prevent data leaks.

For instance, if you have a publicly accessible SharePoint server in your company, you can enable keyword file-filtering to block any file containing the words “confidential” or “internal only.” You can even specify a threshold for how many times those words must appear before a file is disallowed from being posted, as in the sketch below.
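
That keyword-plus-threshold policy is easy to picture in code. The following sketch illustrates the behavior described above; it is not Forefront's actual implementation or API:

    # Illustration of keyword file-filtering with a threshold. This is
    # not Forefront's API, just the policy the article describes.

    BLOCKED_KEYWORDS = ("confidential", "internal only")
    THRESHOLD = 2   # occurrences allowed before a posting is rejected

    def allow_posting(file_text: str) -> bool:
        text = file_text.lower()
        hits = sum(text.count(word) for word in BLOCKED_KEYWORDS)
        return hits < THRESHOLD

    print(allow_posting("Quarterly newsletter"))                        # True
    print(allow_posting("CONFIDENTIAL: internal only. Confidential."))  # False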

Rights Management Services (RMS) is also an effective addition to your defense-in-depth strategy, protecting the documents themselves regardless of where they’re stored or downloaded. Most commercial applications don’t need this level of protection, but it can be helpful for particularly sensitive documents, such as financial or acquisition plans prior to public release. Since the release of Windows Server 2008, RMS has been available as an Active Directory server role.

A full audit trail will be required for any forensic investigation, resulting in a huge amount of data. You can enable Audit Collection Services (ACS), an add-in for SCOM, on high-risk resources to pull all of the audit records as they’re generated to a central place for secure storage and analysis. This configuration will prevent attackers from tampering with the forensic data, even if the attackers have high privilege.

The “Trust” arrow in Figure 2 indicates this important flow of authentication and authorization information, explored later in this article in the section “Federated Identity Management.”

Centralized Health-Care Data Management

Major players vying for a slice of the medical information market include Microsoft HealthVault and Dossia. Dossia, an independent nonprofit infrastructure created by some of the largest employers in the United States, gathers and stores information for lifelong health records.

President Obama raised expectations for the benefits of centralized health-care data in the United States in terms of reduced costs and improvements in research. With the Health Insurance Portability and Accountability Act legislation, there’s also enormous pressure to protect patient privacy. Medical information is sensitive, and its misuse can change people’s lives if, for example, it’s used in employment decisions.

Questions have already been raised on the use of genetic markers in employment decisions. Congress addressed those questions in the Genetic Information Non-Discrimination Act. The next several years will see the tension escalate between cost containment and privacy as cloud service providers try to navigate this minefield.

While employers have an incentive to reduce health-care costs, it’s important to understand the security model: who collects the data, how the data is used, who has access to it, and what the risks of collecting and sharing it are. One interesting question in the context of cloud computing is: who’s responsible when there’s a problem? Who’s the custodian of the record, and what happens if there’s a significant data breach or misuse? As sensitive information such as medical records moves into the cloud, security concerns will certainly escalate.

Centralized Application Management

Web application hosting has been outsourced for at least a decade. During that time, Akamai has hosted an increasingly large percentage of time-critical files for Web site owners worldwide. Also, programmer Dave Winer worked with Microsoft to create the precursor to the Web services that have since proliferated into the wide range of WS-* standards available today.

Web-based applications have steadily grown in importance, to the point where a new name seemed necessary for the combination of service-orientation and standardized Internet service interfaces—hence the term “cloud computing.” What’s new and different today from 10 years ago is the attention given to the value that’s available at reasonable marginal cost. A company no longer needs to develop Exchange expertise to have the benefit of Exchange services, as there are a number of vendors competing to provide that service.

For a service to migrate easily from a local location to the cloud and back again, the application needs to provide a standard service-oriented set of interfaces for use both locally and in the cloud. This is why a cloud application was initially called software as a service. The most widely adopted application-service interface standards are the WS-* protocols mentioned earlier. When a business application is undergoing a revision, it’s a good time to review the application-interface specifications to see if they can fit one of the existing Web services standards.

All authorization claims and authentication identities need to be shared by all resources, whether local or cloud-based. Over time, applications will become able to migrate to the most efficient locale to meet their customers’ expectations. At that point, moving an application is a simple matter of changing a directory entry for the application. Provisioning resources is just a base function of the selected services provider. The cloud provides a virtualized view of the resources that looks like a single computer, one that’s never down for service, but that could in reality be hosted on many machines or shared on a single machine, as demand requires.
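
That claim, that moving an application is a matter of changing a directory entry, is service-location indirection. A minimal sketch with a hypothetical registry and endpoints:

    # Minimal sketch: clients find services through a directory, so
    # relocating a service to the cloud is one registry update.
    # The registry contents and endpoints are hypothetical.

    service_directory = {
        "payroll": "https://intranet.example.com/payroll",  # local today
    }

    def endpoint(service_name: str) -> str:
        return service_directory[service_name]

    # Migration day: the application itself does not change.
    service_directory["payroll"] = "https://cloud.example.com/payroll"
    print(endpoint("payroll"))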

Any application provider needs to ensure that it doesn’t become an infection vector for malware. E-mail providers are especially attractive vectors for malware distribution, but attacks can be all but guaranteed through any channel with a public component. Forefront Protection 2010 for Exchange gives users of cloud-hosted applications the confidence that no other customer will compromise services that they depend upon. All executables are checked before they can be loaded onto the servers and into client computers.

Federated Identity Management

Online identity has two primary manifestations these days:

  1. Governments or corporations insist on a tight binding between human identity and online identity. The rise of machine-readable passports and government-issued smart cards is proof of that assertion. Active Directory is one example of this type of support.
  2. Online ID providers supply a consistent identity used to build a profile for predicting future behavior. Windows Live ID, operating on the Internet with simple Turing tests (such as CAPTCHA), proves that a human being is requesting the account. A simpler example is a verification code sent to an Internet e-mail account.

Depending on the application, either one or both types of identity might be provided to the cloud service to obtain authorization to access data. Every enterprise will have its own identity-management system to control access to information and computing resources. That identity should have the same weight for getting authorization to run applications in the cloud.

External identity providers will typically only verify customers or other casual users. Therefore, the cloud identity system needs to track the owner of each identity and the level of assurance that’s given to that identity. Coexistence of services in local and cloud environments is only possible when the same standard service identity interface is used for authorization in both environments.

Only a few enterprises will be interested in creating their own private cloud service. For those doing so, the cloud identity solution will need to work across all divisions and acquisitions. While it’s possible for a cloud service to create its own identity provider, such a proprietary solution would take it outside of our definition of a true cloud service.

These cases need a federation gateway from each cloud service that links the external identity to an internal identity manager, such as Forefront Identity Manager, providing a clean and quick authorization provider for each cloud resource that’s completely independent of the original identity provider. The enterprise must create a list of all known sources of identity used to authorize access to resources, to be sure that any cloud services provider can accommodate all of them.

Using Federated Identity

As reported in the Forefront blog, Thomson Reuters was able to provide single sign-on (SSO) access to Treasura, its treasury management offering, and related cloud services. Using federated identity management based on ADFS 2.0, the firm’s customers can use their corporate logon identities without having to sign in again to access the Thomson Reuters products.

Among the many identity providers supported by Treasura are Sun OpenSSO and Microsoft Active Directory. Because Windows Identity Foundation gives its application developers the same familiar Windows development tools to provide SSO without having to write custom authentication code, Thomson Reuters expects to save an average of three months of development time.

The easiest approach to cloud authentication is exposing access only through the company’s own identity provider. That approach works as long as any user tracking is limited to uses of the company’s identity provider. As soon as customers or other partners need controlled access to the cloud applications or data, the enterprise is going to need heterogeneous sources of user identity. Sometimes the identity will be strong, such as national identity smart cards; in other cases it will just provide continuity, like Windows Live ID (formerly called Passport).

The end-point application and data servers will need to be aware of the origin and reliability of the identity presented in such heterogeneous environments before authorizing access. Thus, for example, business-critical information can be protected with the enterprise’s own Active Directory while the external identity provider can be used for tracking customer behavior over an extended period.
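
A minimal sketch of that origin-aware authorization: the server checks not only the claim but also who issued it and how much assurance the issuer carries. The issuer names and assurance levels are hypothetical.

    # Minimal sketch: authorization depends on the identity's origin.
    # Issuer names and assurance levels are hypothetical.

    ISSUER_ASSURANCE = {
        "corp-active-directory": 3,   # enterprise-managed, strong
        "windows-live-id": 1,         # continuity only, self-asserted
    }

    def authorize(resource_min_assurance: int, token: dict) -> bool:
        assurance = ISSUER_ASSURANCE.get(token["issuer"], 0)
        return assurance >= resource_min_assurance

    # Business-critical data requires the enterprise's own directory;
    # customer-behavior tracking accepts the external provider.
    print(authorize(3, {"issuer": "corp-active-directory"}))  # True
    print(authorize(3, {"issuer": "windows-live-id"}))        # False
    print(authorize(1, {"issuer": "windows-live-id"}))        # True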

ADFS is part of the Microsoft identity and security platform, as well as part of its Windows Azure cloud operating environment. ADFS, a Windows Server component, provides Web SSO technologies to authenticate a user to multiple Web applications.

ADFS 2.0 is designed to allow users to employ SSO across both cloud-hosted and on-premises applications. This gives Microsoft Online Services the ability to authenticate users with corporate IDs from Active Directory or with Windows Live IDs. Cloud administrators will still need a separate ID for that functionality. ADFS 2.0, together with Windows Identity Foundation, was previously known by the code name “Geneva.”

Cloud Migration Assistance

There are several ways you can get help as your organization prepares to migrate to the cloud, including vendor support, security services, availability, application security, elasticity, management, privacy and private cloud services:

  • Vendor Support -- Companies specializing in enterprise security have the expertise to evaluate the many new concerns as companies migrate to cloud-computing services. Any qualified cloud-security expert will be able to create checklists and templates that enterprises can then use as they roll out new services. Be sure to create a list of requirements for any vendor that includes developing SOA to address specific security needs for the enterprise. It’s important that vendors have security expertise, as well as experience deploying secure solutions in real-world environments. Specific experience with major cloud providers like Microsoft, Google Inc. and Amazon.com Inc. will translate into a plan that can help ensure success.
  • Physical and Personnel Security -- Providers need to ensure that physical machines are adequately secure. They must also ensure that access to these machines, as well as all customer data, is not only restricted but documented. The U.S. Government Accountability Office (GAO) has published a document on the “Knowledge of Software Suppliers Needed to Manage Risks” for defense acquisitions that should provide great guidance even for commercial enterprises.
  • Service Availability -- Cloud providers need to reassure customers that they’ll have regular and predictable access to their data and applications. For the IT teams this implies an ability to “scale up” as demand rises and “scale down” as it subsides to produce an elastic computer resource that’s cost effective.
  • Application Security -- Cloud providers ensure that applications available as a service via the cloud are secure by implementing testing and acceptance procedures for outsourced or packaged application code. They also require that application-security measures (application-level firewall and database auditing) be in place in the production environment.
  • Elastic Computing -- Elasticity is the “true golden nugget of cloud computing and what makes the entire concept extraordinarily evolutionary, if not revolutionary,” says Dustin Owens in the June 2010 issue of the Communications of the ACM. The National Institute of Standards and Technology (NIST) captures this important characteristic in its definition of cloud computing (V15): “Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
  • Management -- Newer versions of management tools can bridge the gap between applications and data that are shared between local and cloud resources. This capability is only effective when it can range from the enterprise to the cloud. For example, the next version of System Center Configuration Manager is going to support “multiple device types” and let users “seamlessly access their data from virtually anywhere, across multiple device types while providing IT with unified management tools and centralized control,” according to a new blog post on the System Center team blog.
  • Privacy -- Finally, providers ensure that all critical data (credit card numbers, for example) is masked and that only authorized users have access to data in its entirety. Moreover, digital identities and credentials must be protected—as should any data that the provider collects or produces about customer activity in the cloud.

Private Clouds

Many governments have enacted laws, regulations and certification programs aimed at protecting their citizens’ privacy and their national interests. The result has been limited use of publicly available clouds for many applications that handle data that’s protected by regulation.

For example, certification programs run by national bodies, such as the Federal Information Security Management Act (FISMA) implementation project at NIST, will need to be considered in addition to other compliance requirements. In response, some cloud services providers are creating clouds dedicated to use by U.S. federal agencies or other governmental bodies to make compliance checking easier.

With the snowstorms of winter 2010, even the U.S. federal government found itself shut down in Washington, D.C. Suddenly, the idea of operating government services from home or remote sites is no longer unthinkable. At the same time, the risk of releasing large quantities of private information acts as a counterweight against exposing more data to the Internet, and the current trend is to limit the federal government’s exposure.

As another example, local governments like the city of Newark, N.J., have more freedom to find cost-effective solutions that don’t require heavy capital expenditures to make city employees more productive with a common set of tools and easy collaboration. “In the City of Newark, we’re focused on ensuring that our IT modernization and cost-saving programs exceed the mayor’s overall objectives of renewing government,” says Michael Greene, CIO of the city of Newark.

A number of independent software vendors have chosen to make their offerings available on cloud platforms like Windows Azure, which already lends those offerings credibility with government agencies. The Windows Azure cloud has thus become an application platform that benefits both the developer community and governmental users of dedicated clouds.

Dan Griffin is a software security consultant based in Seattle. He can be reached at www.jwsecure.com.

Tom Jones is a software architect and author specializing in security, reliability and usability in networked solutions for financial and other critical cloud-based enterprises. His innovations in security span a full range from mandatory integrity to encrypting modems. He can be reached at tom@jwsecure.com.