
Enabling Information Security through HBI Information Classification

Technical Case Study

Published: December 2007
Updated: November 2009

To prevent the inadvertent disclosure of High Business Impact (HBI) information, the Microsoft IT group designed and implemented a system using Microsoft technologies in conjunction with a third-party solution that automatically identifies and classifies HBI information at risk, and then starts the remediation process.


Situation

Information security processes that relied on manual inspection and remediation could not reach all HBI content across the Microsoft network. Microsoft IT needed an automated, service-oriented solution to help reduce its biggest operational risk: the loss of sensitive information.

Solution

A highly scalable content-scanning solution based on a grid computing architecture, coupled with automated tagging, remediation, digital-rights-management protection, and workflow components, helps users identify, classify, and protect their sensitive data in various locations across the Microsoft network.

Benefits

  • Nearly all SharePoint sites and file shares are automatically scanned for HBI content on a regular basis, compared to less than 1 percent coverage with manual methods.
  • Fewer personnel are required to identify and remediate HBI information.
  • The system automatically generates remediation actions or notifications to information owners as it discovers content issues.
  • Process improvements have led to better compliance with data-handling standards, more efficient handling of sensitive information, and a significant reduction in the overall risk associated with the loss of sensitive information.

Products & Technologies

  • RSA DLP Datacenter 7.0.2
  • RSA DLP Network
  • .NET Framework
  • Windows Compute Cluster Server 2003
  • Microsoft SQL Server 2005
  • Microsoft Office SharePoint Server 2007
  • Windows Server 2008 R2
  • Active Directory Rights Management Services

The security of sensitive information is one of the greatest concerns that many companies face today. The loss or theft of HBI information is of particular concern, because it can expose the company to an information breach that can potentially cause a loss in revenue, productivity, reputation, brand value, or even a company's competitive advantage if the information includes important intellectual property (IP).

This paper describes the approach, design, implementation, and benefits of the technical solution used at Microsoft. The paper also provides suggested best practices so that Microsoft customers can benefit from the lessons that Microsoft IT learned. This paper is intended for IT professionals who design and manage compliance systems, in addition to risk managers and compliance auditors.

Situation

At Microsoft, a very large volume of data is dispersed over IT-managed Microsoft SharePoint® sites and IT-managed file shares where HBI information may reside. Microsoft IT needed a technology-based solution to identify information that might be at risk and then help prevent the unauthorized disclosure—whether inadvertent or malicious—of this information.

Microsoft has long had content policies in place in accordance with regulatory and corporate mandates. The missing component was an automated identification, monitoring, and protection mechanism through which line-of-business owners and end users could confirm, at a detailed level, their compliance with the policies and guidelines for handling that content. The high volume of information at Microsoft made manual inspection of all SharePoint sites and file shares, and manual notification of policy compliance issues to the owners or custodians of HBI information, an impossible task.

The challenge was how to deliver this missing capability across the global organization without installing a huge new IT infrastructure or incurring enormous costs. Before the development of the information classification solution, Microsoft IT was concerned about the potential for unintended accessibility of HBI information to a wide range of Microsoft personnel. These personnel included those who simply used basic search tools to gather information in the daily course of their work. At the same time, Microsoft IT wanted to raise end users' awareness about:

  • The risks of unsecured HBI information
  • How they could help ensure the security of such sensitive information
  • How they could use Active Directory® Rights Management Services (AD RMS) to encrypt the information

The implementation of a data loss prevention (DLP) system can affect a high volume of systems, content, employees, and business processes. Many chief information officers, chief information security officers, and chief security officers therefore struggle with identifying how and where to start. However, implementing a technology to prevent the loss or misuse of sensitive content is just one part of the solution. In fact, an organization must address an entire set of business processes and operations to prepare for such an implementation and to manage the incidents and intelligence that arise from the use of this technology. The most effective DLP efforts are those that an organization meticulously plans and implements based on a deep understanding of its most important content governance, risk, and compliance challenges.

For large enterprises, the technical aspects of a DLP and AD RMS solution can play a critical role in enabling automation of discovery and remediation activities, such as encryption of sensitive information. DLP solutions that use Microsoft technologies, coupled with automatic AD RMS protection, can enable an enterprise to scan, classify, and protect enormous volumes of information in a timely and regularly scheduled manner. The enterprise can then focus valuable human resources on remediation and education efforts. Automation of these otherwise time-intensive activities also enables the creation of repeatable, service-oriented operations processes with the lowest possible total cost of ownership (TCO) for the solution.

Solution

In 2006, Microsoft IT initiated a DLP project to address content security and compliance objectives at Microsoft regarding HBI information, while minimizing the impact to business operations. Microsoft IT designed and implemented the HBI information and protection program by using Microsoft technologies such as Microsoft Office SharePoint Server 2007 in conjunction with a third-party application. In 2009, Microsoft IT augmented this solution with the integration of automatic AD RMS protection of HBI material.

The solution automates the identification, classification, and AD RMS protection of HBI information at risk, in addition to a portion of the subsequent remediation process. It enables users to effectively classify and help protect HBI information on SharePoint sites and in file shares according to Microsoft data-handling standards. The third-party part of the solution, RSA® DLP Datacenter 7.0.2, is built on the Microsoft .NET Framework, Windows® Compute Cluster Server 2003, and Microsoft SQL Server® 2005 database software.

Microsoft IT subsequently added to the solution by implementing network-based scanning by using RSA DLP Network. The network-based scanning technology can scan various communication stack protocols for sensitive information (such as personally identifiable information, credit card information, and health care information) in accordance with Microsoft's data-handling standards.
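
The exact detection policies are proprietary to the DLP products, but the general idea of pattern-based detection can be illustrated with a brief sketch. The following Python example is illustrative only and is not part of the Microsoft IT solution; it flags candidate credit card numbers by combining a regular expression with a Luhn checksum to reduce false positives.

```python
import re

# Sequences of 13-16 digits, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_candidate_card_numbers(text: str) -> list[str]:
    """Return substrings that look like card numbers and pass the Luhn check."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(match.group().strip())
    return hits

# Example: scanning a document's extracted text
print(find_candidate_card_numbers("Order charged to 4111 1111 1111 1111 on 3/1."))
```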

Before embarking on designing and implementing the technical solution, Microsoft IT spent a considerable amount of time and effort defining the scope and approach of the project. Microsoft IT's primary goals were to:

  • Develop an initial solution for HBI data, which could then be expanded to various classifications of data in various locations.
  • Establish an effective, repeatable service while minimizing impact to daily business activities.
  • Make the HBI project widely recognizable across the business.

Solution Approach

"One of the primary project objectives was to establish the DLP capability as a valued service within the larger IT organization in accordance with the Microsoft Operations Framework."

Olav Opedal, Sr. Program Manager

As part of a global corporation with approximately 94,000 employees working in more than 500 offices around the world, Microsoft IT realized that it needed to approach HBI information security in incremental steps and address multiple, sometimes competing, requirements. Microsoft IT's information security policies had to incorporate regulatory compliance requirements in addition to the protection of intellectual property, which considerably broadened the scope of the HBI project. As part of this approach, Microsoft IT had to extend the corporate taxonomy to cover the sensitivity of information in addition to the standard taxonomy. The corporate taxonomy enables users to search for content; the security taxonomy enables metadata tagging of HBI material (sensitive information) based on the data-handling standard: PII, PCI, SOX, and so on.
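
The actual data-handling categories and patterns in the Microsoft security taxonomy are not published in this paper. Purely as an illustration of the general shape of such a taxonomy, the following sketch maps hypothetical category names to the keywords and patterns that a scanning engine might search for.

```python
# A minimal, hypothetical security taxonomy: category -> detection hints.
# Category names, keywords, and patterns are illustrative; the real
# Microsoft IT taxonomy follows the corporate data-handling standard.
SECURITY_TAXONOMY = {
    "PII": {
        "impact": "HBI",
        "keywords": ["social security number", "date of birth", "passport"],
        "patterns": [r"\b\d{3}-\d{2}-\d{4}\b"],        # SSN-like pattern
    },
    "PCI": {
        "impact": "HBI",
        "keywords": ["cardholder", "card verification value"],
        "patterns": [r"\b(?:\d[ -]?){13,16}\b"],        # card-number-like pattern
    },
    "SOX": {
        "impact": "HBI",
        "keywords": ["unreleased earnings", "pre-announcement financials"],
        "patterns": [],
    },
}

def categories_for(text: str) -> list[str]:
    """Return the taxonomy categories whose keywords appear in the text."""
    lowered = text.lower()
    return [name for name, spec in SECURITY_TAXONOMY.items()
            if any(keyword in lowered for keyword in spec["keywords"])]
```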

Many organizations focus first on network monitoring-based solutions to prevent unwanted transmission of HBI information. But Microsoft IT realized that catching data in motion would be overly costly and ineffective without first addressing the root of the problem—gaining visibility and control over HBI data at rest. Therefore, the first phase of the project addressed discovery and classification. The second phase addressed data in motion and automatic protection.

Microsoft IT established the following objectives for the first phase:

  • Create a data taxonomy to define what HBI content is.
  • Identify the location of HBI content across the network.
  • Reduce the volume of HBI content that could move across the network or be used on workstations.
  • Implement ownership and access controls for HBI content.
  • Understand and address the business processes that contribute to the sprawl of HBI content.
  • Establish the DLP capability as a valued service within the larger IT organization in accordance with the Microsoft Operations Framework (MOF).

Microsoft IT established the following objectives for the second phase:

  • Identify the location of major egress points across the network.
  • Help protect HBI content that might move across the network or be used on workstations.
  • Implement an extension to the corporate taxonomy.

On a strategic level, the MOF posits that IT groups, including IT security groups, must clearly focus on supporting the business objectives of the organization and emphasizes the business value that IT provides. The idea is that IT can help reduce risks and enable new ways of doing business. In addition, IT systems and services are more effectively managed when they are regarded as an asset to the development and implementation of key business strategies. This approach requires IT groups to demonstrate how their services make specific, tangible, and critical contributions to achieving business outcomes.

Note: For more information about the MOF, visit http://www.microsoft.com/technet/solutionaccelerators/cits/mo/mof/mofeo.mspx.

In the context of content security at Microsoft, Microsoft IT created a project plan. The goal of this plan was to demonstrate that the proposed DLP strategies and technologies were the most effective means of achieving compliance and maintaining policy mandates now and well into the future. The plan focused on rapidly advancing the maturity of the DLP service within the first three months after implementation, from a basic level at which new IT infrastructure is generally considered a cost center to a fully mature level at which the business value of the IT infrastructure is clearly understood and the service is viewed as a strategic business asset and enabler.

From a scope perspective, Microsoft IT decided to start by inventorying HBI information within the enormous volume of content stored across the network file shares and SharePoint sites at Microsoft. The team decided to develop an automated scanning tool and apply it to those data repositories to identify HBI information. Remediation of issues with HBI information would then follow, including limiting broad access, asset classification, asset lockdown, asset removal, and data rights management with encryption. This approach would also become the framework that would eventually include all data assets in motion and, potentially, additional classifications of data, such as Medium Business Impact (MBI) or Low Business Impact (LBI) information.

In defining the scope and approach for the project, Microsoft IT adopted the following methodology:

  1. Develop proof of concept
  2. Conduct risk analysis
  3. Design and build
  4. Pilot and deploy
  5. Provide service management

The approach that Microsoft IT followed for the initial project was to detect and protect information using DLP and custom tools built within IT. The custom tools allowed users to classify their file shares. This manual classification was discontinued with the automatic classification made possible by AD RMS and the new File Classification Infrastructure (FCI) feature in Windows Server® 2008 R2. SharePoint still has manual classification, but this will also change with the deployment of SharePoint 2010. The initial and current solution supports control requirements to mitigate critical information security risk for data at rest. It required the design of compliance modules and deployment of an existing incident-tracking and remediation tool integrated with the Microsoft Service Enterprise (MSE) ticketing system used throughout Microsoft. The compliance modules focus on specific types of search and remediation activities, such as scanning and locking down SharePoint sites or identifying specific types of intellectual property, such as source code in various locations.

The deployment of these combined elements provides precise, automated detection of HBI data at rest in documents located on SharePoint sites and file shares, or elsewhere, and methods to quickly remediate potential issues. In general, after an organization identifies issues with HBI information, it has a duty to address those issues and safeguard that information. The remediation component of the solution was therefore crucial to implementing a complete solution.

Because of the large volume of content at Microsoft, and the heavy reliance on SharePoint sites that facilitate the sharing of information and that have varying levels of data owners and users, Microsoft IT had to transform high-level corporate policies into detailed guidelines for how to apply content security to IT services. This transformation required close collaboration between IT service owners, the corporate legal department, and various other stakeholders to determine appropriate remediation steps.

The parameters that Microsoft IT developed for its initial discovery required the ability to define stringent criteria for automated content evaluation. With enormous data loads and thousands of locations to scan, enterprise scalability, performance, and accuracy were all top considerations. Precision of content detection, in particular, was a concern. Microsoft IT wanted a system that would reliably catch most at-risk content while maintaining a very low rate of false positives. Previous research that Microsoft IT conducted indicated that systems that generate high rates of false positives require much higher levels of human intervention, resulting in a much higher TCO.
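
The effect of the false-positive rate on review workload can be made concrete with a back-of-the-envelope calculation. The sketch below uses made-up numbers, not Microsoft IT measurements, to show why even a modest false-positive rate dominates the triage workload when truly sensitive content is rare.

```python
def review_workload(files_scanned: int, true_hbi_rate: float,
                    recall: float, false_positive_rate: float) -> dict:
    """Estimate how many flagged files a reviewer must triage.

    All inputs are hypothetical; the point is that flagged volume is
    dominated by the false-positive rate when true HBI content is rare.
    """
    true_hbi = files_scanned * true_hbi_rate
    caught = true_hbi * recall
    false_alarms = (files_scanned - true_hbi) * false_positive_rate
    flagged = caught + false_alarms
    return {
        "flagged": round(flagged),
        "false_alarms": round(false_alarms),
        "precision": caught / flagged if flagged else 0.0,
    }

# 1,000,000 files, 0.5% truly HBI, 95% recall:
print(review_workload(1_000_000, 0.005, 0.95, 0.001))  # 0.1% false-positive rate
print(review_workload(1_000_000, 0.005, 0.95, 0.02))   # 2% false-positive rate
```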

Finally, Microsoft IT developed an education campaign to improve end users' awareness and understanding of their role in helping to ensure compliance and the security of the company's sensitive digital information assets. With a strong focus on enforcing Microsoft data-handling policies and standards, the compliance framework that the HBI project established is a major component for the articulation of IT governance in Microsoft IT.

Solution Technical Design

Accuracy, performance, and scalability are the three most important attributes of an enterprise content-scanning solution, and of the HBI project in particular. Microsoft IT evaluated the third-party RSA DLP Datacenter product in a proof-of-concept phase. In 2006, after a successful proof of concept and extensive risk analysis in conjunction with the business owners, Microsoft IT selected RSA DLP Datacenter 7.0.2 as the core content-scanning tool for the solution. This application enables Microsoft IT to identify and classify HBI data in the Microsoft environment. The core intellectual property of the application is a content-analysis engine that evaluates information assets by using a variety of techniques to identify confidential data. These techniques include searching for specific keywords, phrases, or entities; identifying patterns in data; and analyzing the context in which a suspicious string is detected.

In 2008, RSA and Microsoft enabled RSA DLP customers to use the built-in protection services of AD RMS for any content classified as HBI. Over time, the process of detecting HBI information coupled with AD RMS protection will increase the number of documents protected with AD RMS, because user action is no longer needed to encrypt the documents.

With enormous volumes of data to scan and remediate, the infrastructure that supports the automated scanning tool must be high performance and highly scalable. The RSA DLP Datacenter engine is built on the Microsoft .NET Framework and can run on the Windows Compute Cluster Server 2003 operating system to create a grid computing architecture that allows for capacity expansion by adding servers to the grid of compute clusters. Figure 1 shows how compute clusters are located in various regions where significant amounts of data reside.

Figure 1. Windows Compute Cluster Server 2003 grid architecture by Domains for RSA DLP Datacenter

Microsoft IT dedicated 10 load-balanced grid computers to scan all of the SharePoint sites and file shares connected to its storage area network (SAN). A lightweight agent was automatically deployed to scan contractor workstations in Asia and the stand-alone file shares not connected to the SAN. The RSA DLP Datacenter scanning activities are coordinated through the enterprise controller. The site connectors for each location manage both grid computers and lightweight agents.

The grid computers are permanent components of the infrastructure. Microsoft IT uses them to scan large, centrally located data stores. The lightweight agents deploy temporarily to workstations or servers where content resides in remote locations, and then remove themselves after each scanning activity. The results of the scans are stored in a SQL Server 2005 database at each location, and then combined into the SQL Server 2005 enterprise results database. Approximately 1 percent of the data at rest changes daily. Incremental scans of the systems analyze only new, moved, edited, or renamed files.
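
The internal implementation of the incremental scans is not described in this paper. As a minimal sketch of the general technique, the following example re-scans only files whose size or modification time has changed since a stored snapshot was taken; persisting the snapshot (for example, in the results database) is assumed.

```python
from pathlib import Path

def snapshot(root: str) -> dict[str, tuple[int, float]]:
    """Record (size, mtime) for every file under root."""
    state = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            st = path.stat()
            state[str(path)] = (st.st_size, st.st_mtime)
    return state

def files_needing_rescan(previous: dict, current: dict) -> list[str]:
    """New, edited, moved, or renamed files appear as added or changed entries."""
    return [p for p, meta in current.items() if previous.get(p) != meta]

# Usage: persist the snapshot between runs, then analyze only
# files_needing_rescan(previous_snapshot, current_snapshot).
```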

File Classification Infrastructure (FCI)

The front-end file servers are now being upgraded to the Windows Server 2008 R2 operating system, which includes the FCI feature. Microsoft IT uses the content-detection capability in FCI to tag files that contain personally identifiable information (PII) in a sub-segment of the Redmond File Server Utility, the file storage service provided to users and business units at Microsoft. Files tagged as HBI PII are then automatically encrypted with AD RMS via the Bulk RMS Tool, which can apply AD RMS protection to entire shares. Deployed with FCI, specific files tagged with metadata (HBI data, for example) are automatically AD RMS encrypted. The current deployment consists of 6,500 directories. When RSA DLP is fully integrated with the FCI feature, Microsoft IT expects to extend this capability to all of its managed file servers.
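
The FCI rules and the Bulk RMS Tool are configured through their own administrative interfaces, which are not reproduced here. The following sketch is purely conceptual: read_classification_tag and apply_rms_protection are hypothetical placeholders standing in for FCI and the Bulk RMS Tool, and the template name is invented. It only illustrates the "tag first, then protect everything carrying the tag" flow.

```python
from pathlib import Path

def read_classification_tag(path: Path) -> str | None:
    """Hypothetical placeholder: FCI exposes classification properties through
    its own APIs, which are not reproduced here. Returns None in this sketch."""
    return None

def apply_rms_protection(path: Path, template: str) -> None:
    """Hypothetical placeholder for the bulk AD RMS protection step
    (handled by the Bulk RMS Tool in the real deployment)."""
    print(f"would AD RMS protect {path} with template '{template}'")

def protect_tagged_files(share_root: str, tag: str = "HBI PII",
                         template: str = "HBI - Confidential") -> None:
    """Conceptual 'tag first, then encrypt everything carrying the tag' pass."""
    for path in Path(share_root).rglob("*"):
        if path.is_file() and read_classification_tag(path) == tag:
            apply_rms_protection(path, template)
```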

RSA DLP and AD RMS Integration Planning

Microsoft IT needed to carefully plan the integration of AD RMS encryption and the RSA DLP product. It evaluated whether the number of documents that needed AD RMS encryption would have an impact on the infrastructure. Microsoft IT also had to plan communication to the share owners who would be affected, and who would field questions from the users.

Microsoft maintains two installations of AD RMS. One installation is in the forest where Microsoft Exchange testing occurs; the other supports the remaining forests and domains at Microsoft. The Exchange forest is where early deployments of new AD RMS functionality occur, so it made sense to start there. However, this infrastructure must support requests from other forests and domains that share messages or documents.

To support multiple-forest requests for access to information encrypted by AD RMS, Microsoft IT set up a virtual directory in Internet Information Services (IIS) that is dedicated to the integration of the RSA DLP product and AD RMS. Microsoft IT also had to ensure that the proper e-mail and service accounts were enabled and had the right permissions on the IIS server that hosts the AD RMS application. Microsoft IT found that regular communication between the teams involved in the effort was essential for troubleshooting any permission issues that occurred during setup.

For RSA DLP Datacenter, one of the key decisions that Microsoft IT made was which types of documents should receive automatic protection and which templates should be used to protect them. Microsoft IT decided to apply AD RMS encryption to the most sensitive documents first. Currently, Microsoft IT is applying AD RMS encryption only to documents that contain highly sensitive personal information. Business intelligence, intellectual property, and financial documents will not receive AD RMS protection until later in the fall of 2009.
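
This decision amounts to a small policy table that maps detected content categories to protection actions and AD RMS templates. The sketch below is illustrative only; the category and template names are hypothetical, not the ones Microsoft IT uses.

```python
# Hypothetical mapping from detected content category to protection action.
# Mirrors the phased rollout described above: personal data first,
# other HBI categories deferred to a later phase.
PROTECTION_POLICY = {
    "Personal information":  {"auto_protect": True,  "rms_template": "HBI - Personal Data"},
    "Business intelligence": {"auto_protect": False, "rms_template": None},  # later phase
    "Intellectual property": {"auto_protect": False, "rms_template": None},  # later phase
    "Financial":             {"auto_protect": False, "rms_template": None},  # later phase
}

def protection_action(category: str) -> dict:
    """Return the configured action for a detected category (default: notify only)."""
    return PROTECTION_POLICY.get(category, {"auto_protect": False, "rms_template": None})
```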

The value that this integration provides to Microsoft is increased protection of documents that contain sensitive information. The information protection is persistent and follows the document whether it is downloaded to a laptop, desktop computer, or mobile phone; is sent in e-mail; or is otherwise in transit within or outside the network.

Compliance Modules

Although RSA DLP Datacenter provides a content-analysis engine for the solution, Microsoft IT needed to create or customize additional components to automate as many processes as possible. Based on the business requirements, Microsoft IT identified the need for the following technical components, called compliance modules, for automated tool development:

  • SharePoint Lockdown
  • File Share Lockdown (FCI)
  • WinSE IP Identification

Microsoft IT created modules based on custom Web services and Office SharePoint Server 2007 workflow capabilities. These modules enable content classification, automated lockdown, and remediation notification for the two main content sources: SharePoint sites and file shares. The file share classification part of the original modules was discontinued because the new FCI capabilities in Windows Server 2008 R2 enable classification directly on the file. For SharePoint, Microsoft IT still uses the custom-built SharePoint classification and lockdown modules with the content-scanning solution to lock down and reclassify content appropriately. These modules have now been augmented with the AD RMS file-protection capabilities in RSA DLP Datacenter 7.0.2. Figure 2 shows the high-level process now used with AD RMS protection and FCI.

Figure 2. High-level process using FCI and AD RMS to protect files

Figure 3 shows the high-level automated workflow actions for AD RMS protection on file shares.

Figure 3. Utilizing AD RMS and RSA DLP to protect HBI information

SharePoint Lockdown Compliance Module

The SharePoint Lockdown module helps lock down IT-managed SharePoint sites by using a three-pronged strategy, sketched conceptually after the list:

  • Monitoring content to identify sensitive material
  • Classifying data by classifying SharePoint sites
  • Enforcing higher levels of access control on HBI data
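
The actual module is built as custom Web services and Office SharePoint Server 2007 workflows, which are not shown in this paper. As a rough conceptual sketch only, the decision logic behind "identify, classify, then tighten access" might look like the following; the SiteScanResult type and the action strings are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SiteScanResult:
    """Hypothetical summary of a content scan of one SharePoint site."""
    site_url: str
    declared_classification: str   # "LBI", "MBI", or "HBI" as set by the owner
    hbi_findings: int              # number of HBI items the scan detected

def lockdown_decision(result: SiteScanResult) -> list[str]:
    """Conceptual three-pronged response: reclassify, notify, restrict access."""
    actions = []
    if result.hbi_findings > 0:
        if result.declared_classification != "HBI":
            actions.append("reclassify site as HBI")
        actions.append("notify site owner with remediation deadline")
        actions.append("enforce stricter access control on the site")
    return actions

print(lockdown_decision(SiteScanResult("http://team/site-a", "LBI", 12)))
```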

WinSE IP Identification Compliance Module

Microsoft IT also developed a compliance module to detect and remediate unsecured source code on vendor-assigned desktop computers. The WinSE IP Identification module includes the following capabilities:

  • Rules to identify Windows source code
  • A workflow to approve identification of source code

Solution Implementation

The initial content scan to locate and remediate HBI content focused on 12 terabytes of content across the file shares and SharePoint sites located in a single data center—the Redmond data center. That initial scan took only nine days to complete. After three months, the total volume scanned was up to 75 percent of the HBI content across the file shares and SharePoint sites worldwide. Microsoft IT completed 100 percent of scanning for the HBI portion of the project in September 2007, when the total scanned content exceeded 100 terabytes. In spring of 2009, testing began on the new capabilities of combining RSA DLP Datacenter with AD RMS. Microsoft IT expects to deploy the combined solution beyond the initial forest in the fall of 2009. Microsoft IT progressed from initial deployment to an established IT service in just 90 days. Incremental scans now occur on a scheduled basis.

As a critical part of the implementation, Microsoft IT pursued a range of awareness and outreach efforts to internal customers. Because long-term success depends on building a culture of compliance across the company, Microsoft IT created a broad awareness of, and ultimately demand for, content discovery and other services built around remediation. The promotional tactics included poster campaigns, e-mail, and newsletter notices that educated users on HBI, MBI, and LBI data.

In all cases, these marketing messages educated end users on compliance priorities and emerging capabilities. For instance, Microsoft IT sent e-mail that alerted users to the availability of content-scanning and remediation capabilities for individual business users as RSA DLP Datacenter scanning capabilities came online. Ultimately, all these efforts fostered awareness among end users that they are frontline data custodians and play a lead role in maintaining policy compliance.

The solution further empowers a culture of compliance within Microsoft by involving line-of-business owners, content owners, and others in remediation of security issues. When an RSA DLP Datacenter scan of a particular network share reveals a highly sensitive document that has been misclassified as LBI, the system automatically protects the document after an initial notification to the owner.

Microsoft IT also implemented a non-compliance amnesty program. Users were able to use RSA DLP Datacenter to scan their laptops, desktop computers, or other systems on their own, and then remediate any issues. This subtle social pressure helps the company progress toward its goal of cultural change. Rather than trying to implement the technology unilaterally, the self-scan empowers users across the company to support security objectives. It also encourages people who would otherwise be hard to reach through direct on-network scanning to appropriately manage the sensitive content on their systems in compliance with corporate policy.

Best Practices

Through designing, planning, and implementing the HBI solution, Microsoft IT developed the following best practices.

Prioritize Content According to Governance, Risk, and Compliance

The first step in a DLP effort is to assess enterprise content: what it is, how much of it the organization has, how it is used, and where it is located. Table 1 provides some basic guidelines for evaluating content.

Table 1. Content Evaluation Guidelines

  • Inventory: Types of content that are or should be classified as sensitive. Purpose: Begin to understand what content requires protection.
  • Inventory: Locations where content resides. Purpose: Outline and quantify the systems that need to be monitored.
  • Inventory: Business functions that require access to this content. Purpose: Understand how the content is currently used to keep business flowing.
  • Inventory: Individuals, by business function, who require access to this content. Purpose: Learn which individuals can potentially access and expose sensitive content.

To understand what content must be protected and how it should be protected, an organization first needs to clearly understand any industry or government regulations with which it must comply. The organization should start by listing the regulations that pertain to the business and then any business governance requirements that exist for the protection of content that is most sensitive. In other words, each type of content requires an evaluation of the impacts of a potential breach. The goal is to prioritize risks and address the most serious threats first. An organization best accomplishes prioritization through a thorough understanding of risk in the context of business impact and content type.

When reviewing content, an organization should keep in mind that the scanning engine looks for words or patterns, so it is important to build a taxonomy that includes those words and patterns. Understanding what the organization needs to protect allows the operations team to build a taxonomy for use in content scanning. The taxonomy should be reviewed periodically to ensure that it remains up to date and consistent with the organization's needs and regulatory directives.

Build a Project Plan to Establish the Solution as an Operational Service

After an organization has defined an initial set of content-protection goals, the next step is to create an overall project plan with clearly delineated benchmarks and steps to reach these goals. The plan should drive the team beyond a proof of concept or initial implementation toward a complete operational solution that is fully integrated with the day-to-day business operations at all appropriate points in the organization. This effort requires mapping content-protection policies into guidelines that will determine how to handle the myriad content-protection situations that may arise.

Start at the Root of the Problem

After an organization develops an understanding of its sensitive content according to business and policy priorities, and develops a basic understanding of how that content is stored and where it travels on the network, the next logical step is to create an inventory of this content stored across the network. Starting with content discovery enables the organization to understand the magnitude of the sprawl of sensitive content that has accumulated over time. This effort aids greatly in estimating and focusing subsequent efforts.

It is also wise to approach a content-inventory activity with a narrow initial scope that scans for a single class of content or a limited number of classes—for example, high-impact personal information or content regulated by the payment card industry. Limiting the class of content for discovery initially enables IT and compliance executives to keep tighter control over discovery and remediation.

Use Cross-Functional Teams

One of the most important aspects of a successful content security strategy is to obtain the involvement of the key business team members from across the organization. Different employees handle sensitive content for different purposes and in different ways. The flow of content across the company varies from one business process to the next. Predicting where content might ultimately go inside or outside the network can be difficult. An organization therefore needs the support of staff from all departments—for example, IT, privacy/compliance, human resources, legal, marketing/communications, and business operations—to act on policies and remediate any incidents that are discovered.

Promote a Culture of Content Protection and Awareness

Technology and policies alone will not protect an organization. The organization must continuously evangelize the importance of protecting sensitive content, and it must provide training on the appropriate ways to share content. Establishing who within the organization has ownership over content is just the first step in promoting an attentive and vigilant culture of content security. Training and ongoing oversight are just as important as the technical safeguards and solutions that the organization implements.

Expand Coverage

After an organization completes the process of implementing content discovery for the highest-priority segment of the sensitive content, it should expand the program. This expansion can include:

  • Implementing additional safeguards, such as network and desktop monitoring of HBI segments
  • Expanding content segments covered, such as MBI and LBI data classifications
  • Implementing a taxonomy that includes sensitivity of documents
  • Implementing data-in-motion scanning
  • Implementing automatic AD RMS protection of HBI content

Benefits

Microsoft IT estimates that the return on investment (ROI) for the HBI project is as high as 600 percent since the project's implementation. The automated solution has significantly reduced the number of operators required to conduct manual search and notification efforts, and it performs a far more comprehensive analysis of all digital information assets. In fact, Microsoft IT estimates that manual scanning reached less than 1 percent of all these assets over the course of a year, whereas the initial automated comprehensive scan of huge volumes of shares and sites finished in just 14 days.

The solution has allowed Microsoft to reduce the risk of inadvertent loss of HBI information by:

  • Detecting sensitive information and ensuring that all the users of that information are aware of the sensitivity of the information and how to safeguard it
  • Automatically encrypting the information according to the Microsoft IT data-handling standard using AD RMS protection without any user action

Conclusion

Adequate safeguarding of HBI information in large organizations is a critical but daunting undertaking. The large volume of information at Microsoft made manual methods of identifying and classifying HBI information a challenging task. To streamline the effort, Microsoft IT developed a comprehensive approach that includes clear articulation and enforcement of IT governance, thorough engagement of business owners to prioritize risks, and service-oriented operational processes.

By using Microsoft technologies and the third-party RSA DLP Datacenter application, Microsoft IT implemented automated discovery scanning and remediation methods for HBI information. These methods can examine and classify enormous volumes of information in a short period of time. This solution has resulted in significant ROI, increased compliance with data-handling standards, and a reduction in the overall risk associated with the loss of sensitive information.

For More Information

For more information about Microsoft products or services, call the Microsoft Sales Information Center at (800) 426-9400. In Canada, call the Microsoft Canada Information Centre at (800) 933-4750. Outside the 50 United States and Canada, please contact your local Microsoft subsidiary. To access information via the World Wide Web, go to:

http://www.microsoft.com

http://www.microsoft.com/technet/itshowcase

© 2009 Microsoft Corporation. All rights reserved.

This document is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY. Microsoft, Active Directory, SharePoint, SQL Server, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. The names of actual companies and products mentioned herein may be the trademarks of their respective owners.
