You can split IT workloads between the fixed cost of local resources and the variable cost of cloud resources without losing control of access to enterprise assets. Let us show you how.
When our children were young, we kept them safe at home. When they had learned to fend for themselves, we let them venture forth. It’s a similar situation with enterprise assets. We’ve traditionally protected them within the perimeter of the network. We put up firewalls to ensure those assets didn’t leave the premises.
When you tell an IT manager he can share his local computing load with on-demand cloud-based resources, the first reaction is excitement at the possible cost savings and user-experience improvements. But, like an overprotective parent, the manager often grows skeptical and anxious about the new challenges of securing enterprise assets across multiple control points.
Data processing has already moved from the enterprise datacenter to PCs spread across the world. The next logical step is to move the enterprise data and applications from within the enterprise firewall to where they’ll be closer to the business users who need them. That means moving to the cloud.
To reap the benefits of cloud computing without the accompanying anxiety, you need to establish distributed access control to match the distributed content and distributed applications. Here, we’ll outline the steps you need to take to ensure reliability and control as your data and applications move beyond the enterprise perimeter.
The first step in establishing an anxiety-free cloud deployment is to create a diagram showing the application and data flow. For the design to be service-oriented, each application needs to operate as a service that users can access either locally or in the cloud. Similarly for data, the location of the data should not be specified in the application. You need to be able to configure that location when you deploy the application. You can see in Figure 1 how the components of the IT environment relate with the applications and data sourced on either local or cloud resources.
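To make the data-location point concrete, here is a minimal sketch of deploy-time configuration. The config file name, keys, and URL are illustrative assumptions, not part of any product: the idea is simply that the application reads its data-store endpoint at startup instead of hard-coding it, so the same build can point at an on-premises server or a cloud one.

```python
import json

# Hypothetical deployment configuration: the data-store location lives
# outside the application code and is set when the service is deployed.
CONFIG = json.loads("""
{
  "data_store": "https://sharepoint.example.com/sites/finance",
  "environment": "on-premises"
}
""")

def data_store_url():
    """Return the data-store endpoint configured for this deployment."""
    return CONFIG["data_store"]

print(data_store_url())
```

Swapping the deployment between local and cloud environments then means changing one configuration value, not rebuilding the application.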
Figure 1 A look at the architecture of application and data flows.
Your development team sources the application executables. You can have them deployed directly from the vendor, but you can exert more control if you first bring all application code and updates into the enterprise and distribute them from there. Data migrates from the client machine to either the enterprise or cloud data stores, which are shown as SharePoint servers in Figure 1.
When applications access data, that action is authorized by access-control mechanisms local to each data store. The integrity of the application executables and the enterprise data needs extra consideration as they move beyond the perimeter and into the cloud. The ideal and most flexible management situation is when you can manage local and cloud resources as a single entity that can dynamically respond to resource requests.
The first step in justifying any cloud deployment is determining the return on investment. Costs are typically classified as set-up or conversion costs, which include commissioning the new services, training, and decommissioning the old services. Return is expressed as reduced cost per month and the number of months needed to recoup the investment. More sophisticated analysis includes discounted cash flow, but if the payback period is less than two years, that extra analysis likely adds no real value to the decision process.
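The payback arithmetic described above can be sketched in a few lines. The dollar figures below are purely illustrative assumptions:

```python
def payback_months(setup_cost, monthly_savings):
    """Months needed for the reduced monthly cost to recoup the one-time
    set-up/conversion investment (commissioning, training, decommissioning)."""
    if monthly_savings <= 0:
        raise ValueError("no payback without a positive monthly saving")
    return setup_cost / monthly_savings

# Illustrative figures only: a $120,000 conversion that saves $8,000 per
# month pays back in 15 months -- inside the two-year threshold.
print(payback_months(120_000, 8_000))  # 15.0
```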
The real value of cloud deployments comes from intangible benefits such as improved responsiveness to fluctuations in demand for services and improved cost control. Consider these types of costs from the perspective of the IT department:
You can justify using cloud services for semi-variable costs for reasons such as off-loading payroll services to a dedicated provider. In payroll, as in e-mail, the rules change rapidly: the software needs constant updates, and the expertise to perform these functions is expensive. While it's more difficult to justify cloud provisioning based on semi-variable costs alone, the results can still be positive and can help IT focus on its real mission of delivering value to the enterprise.
Most IT executives think cloud computing is a way to reduce capital expenditures by using virtualization technology. Many vendors tack the word "cloud" onto any Internet service. For our purposes here, we're using the Gartner Inc. description of how the cloud came to be so important to business: it's due "in part to the commoditization and standardization of technologies, in part to virtualization and the rise of service-oriented software architectures, and most importantly, to the dramatic growth in popularity of the Internet." This is important in four specific areas:
In 2007, Gartner began telling security conferences it was time to abandon the hardened perimeter boundary between the enterprise and the Internet. Even at that time, experts were arguing that enterprise boundaries were already porous. Perimeters had become irrelevant to the task of keeping out intruders, so access control was required with every IT service. Security de-perimeterization is the current reality. To be truly secure, only the server that contains data can ultimately control access.
Still, it isn’t rational to manage access at every server, because many deployments contain hundreds or even thousands of servers. IT can’t really determine data rights and access rules. IT can, however, establish a role-management system with which business owners can permit or deny access relevant to business objectives.
The regulatory environment has become increasingly stringent both for data modification and data access. This requires a new paradigm: one that will allow data to migrate to whichever server is best able to service access requests, while ensuring compliance at reasonable cost. Here are some requirements to consider for data management in a cloud environment:
Starbucks Corp. found that the cost and delay of physical (paper-based) distribution of current pricing, business analysis and news made that channel ineffective. As a result, it now supports SharePoint for its network of 16,000 locations. That SharePoint site has become a business-critical communications channel through which employees can get current information, with the ability to search quickly for the information they need when they need it.
Availability and reliability are tracked with Microsoft System Center Operations Manager (SCOM) and other analytic tools. Because SharePoint supports both internal and external network connections, the server locations can adapt to suit the current network topology without concern for local, cloud or mixed environments. This deployment has enabled Starbucks to realize the following benefits:
Any data store must be prevented from becoming an infection vector for viruses or spyware. Data types, like executables and compressed or encrypted files, can be blocked for a variety of integrity and compliance concerns. Microsoft employee David Tesar blogged about some of the business reasons to protect SharePoint using Forefront Protection 2010 for SharePoint, which was released in May 2010.
To ensure full protection, data from one customer must be properly segregated from that of another. It must be stored securely when “at rest” and able to move securely from one location to another (security “in motion”). IT managers must ensure that cloud providers have systems in place to prevent data leaks or access by third parties. This should be part of an SLA. Proper separation of duties should ensure that unauthorized users can’t defeat auditing and/or monitoring—even “privileged” users at the cloud provider. Figure 2 shows the various data transitions susceptible to outside attack.
Figure 2 The relationships of data and trust transitions
The new attack points against enterprise desktops and servers using the Internet or physical media include:
As the amount of data increases, the time to filter this data or the cost to increase storage capacity can be significant. The data keyword and file-filtering available with Forefront Protection 2010 for SharePoint lets you control the type of data you allow on the SharePoint server and provide reporting on what types of files are present. This functionality can reduce costs by not requiring additional storage capacity and by helping to prevent data leaks.
For instance, if you have a publicly accessible SharePoint server in your company, you can enable keyword file-filtering to prevent anything with the words “confidential” or “internal only” inside the files. You can even specify the threshold of how many times these words show up before you disallow them from being posted.
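The keyword-plus-threshold logic described above can be illustrated with a short sketch. This is a hypothetical stand-in, not the Forefront Protection 2010 API (that feature is configured within the product itself); the phrases and threshold are the ones from the example:

```python
def blocked_by_keyword_filter(text,
                              keywords=("confidential", "internal only"),
                              threshold=1):
    """Block a document when any flagged phrase appears at least
    `threshold` times, case-insensitively."""
    lowered = text.lower()
    return any(lowered.count(k) >= threshold for k in keywords)

doc = "Draft budget -- INTERNAL ONLY. Do not post externally."
print(blocked_by_keyword_filter(doc))               # True: one hit suffices
print(blocked_by_keyword_filter(doc, threshold=2))  # False: below threshold
```

Raising the threshold lets through documents that mention a flagged phrase only in passing, while still catching files that repeat it.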
Rights Management Services (RMS) is also an effective addition to your defense-in-depth strategy, protecting the documents themselves regardless of where they're stored or downloaded. Most commercial applications don't need this level of protection, but it can be helpful for particularly sensitive documents, like financial or acquisition plans prior to public release. Since Windows Server 2008, RMS has been available as an Active Directory role.
A full audit trail will be required for any forensic investigation, resulting in a huge amount of data. You can enable Audit Collection Services (ACS), an add-in for SCOM, on high-risk resources to pull all of the audit records as they’re generated to a central place for secure storage and analysis. This configuration will prevent attackers from tampering with the forensic data, even if the attackers have high privilege.
The “Trust” arrow in Figure 2 indicates this important flow of authentication and authorization information, explored later in this article in the section “Federated Identity Management.”
Major players vying for a slice of the medical information market include Microsoft HealthVault and Dossia. Dossia, an independent nonprofit infrastructure created by some of the largest employers in the United States, gathers and stores information for lifelong health records.
President Obama raised the expectation of benefits from centralized health-care data in the United States in terms of reduced costs and improvements in research. With the Health Insurance Portability and Accountability Act legislation, there's also enormous pressure to protect patient privacy. Medical information is sensitive, and its misuse can change people's lives if, for example, it's used in employment decisions.
Questions have already been raised on the use of genetic markers in employment decisions. Congress addressed those questions in the Genetic Information Non-Discrimination Act. The next several years will see the tension escalate between cost containment and privacy as cloud service providers try to navigate this minefield.
While employers have an incentive to reduce health-care costs, it's important to understand the security model: who collects the data, how is the data used, who has access to the data, and what are the risks of collecting and sharing the data? One interesting question in the context of cloud computing is, who's responsible when there's a problem? Who's the custodian of the record, and what happens if there's a significant data breach or misuse? As sensitive information such as medical records moves into the cloud, security concerns will certainly escalate.
Web application hosting has been outsourced for at least a decade. During that time, Akamai has hosted an increasingly large percentage of time-critical files for Web site owners worldwide. Also, programmer Dave Winer worked with Microsoft to create the precursor to the Web services that have proliferated into the wide range of WS-* standards available today.
Web-based applications have steadily grown in importance, to the point where a new name seemed necessary for the combination of service-orientation and standardized Internet service interfaces—hence the term “cloud computing.” What’s new and different today from 10 years ago is the attention given to the value that’s available at reasonable marginal cost. A company no longer needs to develop Exchange expertise to have the benefit of Exchange services, as there are a number of vendors competing to provide that service.
For a service to migrate easily from a local location to the cloud and back again, the application needs to provide a standard service-oriented set of interfaces for use both locally and in the cloud. This is why a cloud application was initially called software as a service. The most widely adopted application-service interface standards are the WS-* protocols mentioned earlier. When a business application is undergoing a revision, it's a good time to review the application-interface specifications to see if they can fit one of the existing Web services standards.
All authorization claims and authentication identity need to be shared by all resources, whether local or cloud-based. Over time, all applications will become able to migrate to the most efficient locale to meet their customers’ expectations. At that point, moving an application is a simple matter of changing a directory entry for the application. Provisioning resources is just a base functionality of the services provider selected. The cloud provides a virtualized view of the resources that looks like a single computer, one that’s never down for services, but could in reality be hosted on many machines or shared on a single machine, as demand requires.
Any application provider needs to ensure that it doesn't become an infection vector for malware. E-mail providers are especially attractive vectors for malware distribution, but attacks are almost guaranteed through any channel with a public component. Forefront Protection 2010 for Exchange gives users of cloud-hosted applications the confidence that no other customer will compromise services that they depend upon. All executables are checked before they can be loaded onto the servers and into client computers.
Online identity has two primary manifestations these days:
Depending on the application, either one or both types of identity might be provided to the cloud service to obtain authorization to access data. Every enterprise will have its own identity-management system to control access to information and computing resources. That identity should have the same weight for getting authorization to run applications in the cloud.
External identity providers will typically only verify customers or other casual users. Therefore, the cloud identity system needs to track the owner of each identity and the level of assurance that’s given to that identity. Coexistence of services in local and cloud environments is only possible when the same standard service identity interface is used for authorization in both environments.
Only a few enterprises will be interested in creating their own private cloud service. For those doing so, the cloud identity solution will need to work across all divisions and acquisitions. While it’s possible for a cloud service to create its own identity provider, such a proprietary solution would take it outside of our definition of a true cloud service.
These cases would need a federation gateway from each cloud service to link the external identity to an internal identity manager, such as Forefront Identity Manager. That gateway provides a clean, quick authorization provider for each cloud resource, completely independent of the original identity provider. You must create a list of all known sources of identity used to authorize access to resources, to be sure that any cloud services provider can accommodate all of them.
As reported in the Forefront blog, Thomson Reuters was able to provide single sign-on (SSO) access to its Treasura treasury management and related cloud services. Using federated identity management based on ADFS 2.0, its customers can use their corporate logon identity to access the Thomson Reuters products without having to sign in again.
Among the many identity providers supported by Treasura are Sun OpenSSO and Microsoft Active Directory. Because Windows Identity Foundation lets its application developers use the same familiar Windows development tools to provide SSO without writing custom authentication code, Thomson Reuters expects to save an average of three months of development time.
The easiest approach to cloud authentication is exposing access only through the company's own identity provider. That approach works as long as any user tracking is limited to the use of the company's identity provider. As soon as customers or other partners need controlled access to the cloud applications or data, the enterprise is going to need a heterogeneous source of user identity. Sometimes the identity will be strong, such as national identity smart cards; in other cases it will just provide continuity, like Windows Live ID (formerly Passport).
The end-point application and data servers will need to be aware of the origin and reliability of the identity presented in such heterogeneous environments before authorizing access. Thus, for example, business-critical information can be protected with the enterprise’s own Active Directory while the external identity provider can be used for tracking customer behavior over an extended period.
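The origin-and-assurance check described above can be sketched as follows. This is an illustrative sketch only: the provider names and numeric assurance levels are assumptions for the example, not part of any Microsoft API.

```python
# Assumed mapping from identity-provider origin to assurance level.
ASSURANCE = {
    "corporate-ad": 3,      # enterprise Active Directory: high assurance
    "national-id-card": 3,  # national identity smart card: high assurance
    "windows-live-id": 1,   # continuity only, suitable for tracking
}

def authorize(claim_issuer, required_level):
    """Grant access only when the issuing provider meets the resource's
    minimum assurance level; unknown issuers get level 0."""
    return ASSURANCE.get(claim_issuer, 0) >= required_level

# Business-critical data demands level 3; behavior tracking needs only 1.
print(authorize("corporate-ad", 3))     # True
print(authorize("windows-live-id", 3))  # False
print(authorize("windows-live-id", 1))  # True
```

The point of the sketch is that authorization depends not just on who the user claims to be, but on how much the server trusts the source of that claim.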
ADFS is part of the Microsoft identity and security platform, as well as part of its Windows Azure cloud operating environment. ADFS, a Windows Server component, provides Web SSO technologies to authenticate a user to multiple Web applications.
ADFS 2.0 is designed to let users employ SSO across both cloud-hosted and on-premises applications. This gives Microsoft Online Services the ability to authenticate with corporate IDs from Active Directory or with Windows Live IDs. Cloud administrators will still need a separate ID for that functionality. ADFS 2.0, together with Windows Identity Foundation, was previously known by the code name "Geneva."
There are several ways you can get help as your organization prepares to migrate to the cloud, including vendor support, security services, availability, application security, elasticity, management, privacy and private cloud services:
Many governments have enacted laws, regulations and certification programs aimed at protecting their citizens’ privacy and their national interests. The result has been limited use of publicly available clouds for many applications that handle data that’s protected by regulation.
For example, national certification programs like the Federal Information Security Management Act (FISMA) implementation project run by NIST will need to be considered in addition to other compliance requirements. In response, some cloud services providers are creating clouds dedicated to use by U.S. federal agencies or other governmental bodies to make compliance checking easier.
With the snowstorms of winter 2010, even the U.S. federal government found itself shut down in Washington, D.C. Suddenly, the idea of continuing to operate government services from home or remote sites is no longer unthinkable. The risk of releasing large quantities of private information, however, acts as a counterweight at a time when the current trend is to limit the federal government's exposure to the Internet.
As another example, local governments like the city of Newark, N.J., have more freedom to find cost-effective solutions that don’t require heavy capital expenditures to make city employees more productive with a common set of tools and easy collaboration. “In the City of Newark, we’re focused on ensuring that our IT modernization and cost-saving programs exceed the mayor’s overall objectives of renewing government,” says Michael Greene, CIO of the city of Newark.
A number of independent software vendors have chosen to make their offerings available on cloud platforms like Windows Azure, which already provides credibility with government agencies. The Windows Azure cloud has thus become an application platform that benefits both the developer community and governmental users of dedicated clouds.