Information Security at Microsoft Overview
Technical White Paper
Published: June 2006 | Updated: November 19, 2007
Microsoft Mission: Enable people and businesses throughout the world to realize their full potential.
Microsoft IT Vision: When people are enabled and inspired by innovative IT, anything is possible.
Microsoft IT Mission: Deliver value by providing innovative and reliable information technology solutions that seamlessly integrate with and improve how people work.
Microsoft Information Security Vision: Be a competitive advantage for the company.
In this paper:
- Microsoft IT Operational Environment
- Information Security Framework
- Current Security Architecture
The purpose of this white paper is to share the Microsoft strategy for information security. Microsoft Information Technology (Microsoft IT) provides global IT and information security services for Microsoft. This paper focuses on how the Information Security organization within Microsoft IT helps Microsoft protect its digital assets. The goal of this paper is to offer the experience and perspective of Microsoft IT to Microsoft customers who want to improve security in their own IT environments and protection of digital assets.
Through the Trustworthy Computing initiative launched by Bill Gates in 2001, security has become integral to the corporate culture at Microsoft. As part of the initiative, Microsoft created an Information Security Framework based on people (security leadership and culture), processes (risk-based decision making), and technology (the Defense in Depth strategy) to address information security.
Security leadership and culture are foundational to the framework; all employees, business partners, and suppliers must be aware of the role they play in protecting Microsoft digital assets and intellectual property. Microsoft business owners employ risk-based decision making, in partnership with Microsoft Information Security, to determine the appropriate security controls to help protect their digital assets. The Information Security organization works with the business owners so that they can make informed decisions about information security risk. The Defense in Depth strategy applies a comprehensive set of security controls, often features of Microsoft products, implemented at appropriate levels based on the value and vulnerability of the assets that the controls are designed to help protect.
Although this paper does not address other areas of corporate security, such as physical security, some technology-based controls are converging to play a role in both information and physical security.
Microsoft security solutions are based on the Microsoft software and technology platform, which has evolved to become one of the most secure platforms in the industry. However, any technical solution can succeed only in concert with the people and process elements, like those included in the Information Security Framework. All of the technologies and processes continue to mature based on evolving security requirements, future strategic plans, product testing, and validation requirements.
This paper is intended for a wide range of readers, including those who are accountable for information security functions. It is also relevant for those who work closely with or in an information security organization, including business and technology decision makers. Deep knowledge of specific technologies is not a prerequisite to understand the security strategies that this paper describes.
The Microsoft IT group is responsible for managing IT services while supporting the highly dynamic computing environment at Microsoft.
Microsoft Information Security supports the efforts of Microsoft IT by managing security risks to an acceptable level.
Microsoft IT operates an open, operational, and experimental development network that meets the needs of technology-literate developers who create core products. This dynamic environment also supports the global corporate enterprise. Microsoft IT is responsible for managing IT services and a challenging computing environment for more than 140,000 end users and more than 500,000 network devices. These devices include tens of thousands of smartphones that span more than 440 sites worldwide. More than 300 of the sites are sales and marketing offices distributed across major cities worldwide. IT-managed infrastructure exists at more than 200 of those sites. In addition to corporate locations, approximately 9.5 million remote connections per month are made to the Microsoft network, of which approximately 800,000 are full virtual private network (VPN) connections, many from potentially nonsecure networks.
Microsoft IT consists of more than 3,500 staff members who are responsible for managing the IT utility for the company. In addition, Microsoft IT plays a key role in helping the company meet its main business objective of software development and marketing. Microsoft IT serves as an early adopter of Microsoft software, such as Windows Vista®, Windows Server® 2008, the 2007 Microsoft® Office system, and the Microsoft System Center 2007 family of enterprise management products.
Microsoft is a highly dynamic environment with continual growth and early deployment of new technology. This environment houses more than 1,600 line-of-business applications that Microsoft IT supports. These applications range from a single SAP R/3 instance used globally to specialized departmental and workgroup applications for research, product support, and product development. Microsoft IT also provides operations and security for e-mail, a mission-critical application at Microsoft. Approximately 13 million e-mail messages a day flow to and from the Internet, and 3 million e-mail messages a day circulate internally. Despite this high volume, the e-mail system achieves 99.99 percent availability.
Microsoft IT is constantly developing mechanisms to understand, communicate, and prioritize existing and emerging security challenges that surround the enterprise. This combination of factors—an evolving security landscape full of potential vulnerabilities in a large and dynamic IT environment—presents a challenging array of variables for a security organization to comprehend, organize, and address.
The vision of the Information Security organization, to be a competitive advantage for Microsoft, has three elements:
- Microsoft Information Security understands and manages the company's information security risk.
- Business leaders are enabled to manage their information security risk and make informed decisions.
- The work of Microsoft Information Security strengthens the company's reputation.
Microsoft business leaders are ultimately responsible for the security of their digital assets and managing their information security risk. However, Microsoft Information Security also has obligations. Microsoft Information Security must empower the business units in Microsoft to make informed risk decisions. It also must deliver a security-enhanced infrastructure on which Microsoft operates.
Risk-based decision-making allows Microsoft Information Security to align resources with the company's strategy. Microsoft Information Security allocates resources based on risk. Risk can be determined only in partnership with the business. Microsoft Information Security plays an important role in providing the business with the information to accept, mitigate, or transfer risk.
Microsoft Information Security mitigates risk by applying appropriate security controls and solutions through a Defense in Depth strategy. Microsoft Information Security delivers security solutions that balance security, client satisfaction, and cost. Investments in security must be commensurate with the value of the assets they are protecting, and Microsoft Information Security maintains a solutions roadmap to guide these investments and align with IT and the business.
As Microsoft Information Security works toward its vision, it must address short-term, medium-term, and long-term needs. Microsoft Information Security manages today's risks, maintains a program to implement the next generation of solutions, and continually refines the security architecture to prepare for the future.
The drivers for information security at Microsoft are similar to those that the rest of the industry faces. Microsoft Information Security works with the business to manage risk to an acceptable level. The following key drivers heavily influence this collaboration:
- Business. Information Security's business drivers reflect the Microsoft global business model and relationship with customers, partners, and suppliers.
- Regulations. Because of regulations, statutes, and industry mandates, the Information Security and Privacy teams remain in close alignment.
- Technology. Mobile devices and collaborative tools support a decentralized workforce and are changing the risk landscape. One of the unique drivers for the Information Security team is its role in being the "first and best customer" of Microsoft through early adoption of new Microsoft products.
More than ever, computing is about connecting on a global scale. This shift brings additional concerns for Microsoft Information Security. The team primarily focuses on five areas of concern:
- Regulatory and statutory compliance. Like most enterprises, Microsoft must comply with a large number of regulations and statutes that cover areas such as the integrity of financial reporting and privacy. Microsoft Information Security begins by designing a standards-based control set. Requirements then map to this control set, applying the most stringent requirements wherever possible. This mapping allows overlapping requirements to be met efficiently, along with a focus on unique requirements. For example, password requirements may vary by region. However, by globally enforcing the most stringent requirements, Microsoft IT meets its need to maintain a single, standard password policy.
- Mobility of data. Mobility of data presents its own set of concerns. High-capacity universal serial bus (USB) drives, smartphones, digital music players, and related mobile computing technologies all represent opportunities for data to move outside traditional corporate boundaries. Microsoft Information Security must provide security for data in transit, at rest, and in use, by employing technologies such as Information Rights Management (IRM) to help protect data.
- Unauthorized access to data. Ideally, individuals have access to the minimal set of data necessary to perform their jobs. The reality is that controlling access to large amounts of structured and unstructured data is a substantial challenge. Accumulation of rights can become an issue as employees move within an organization. Trusted insiders such as employees may be the most significant threat to protecting data.
- Malicious software. Malicious software is an ever-present threat, and the trend to target malicious software to specific companies and users has kept this a top priority for the Information Security team. By some measurements, the Microsoft network is the largest private-sector cyber attack target in the world.
- Support for an evolving client. To support an evolving client, Microsoft IT and Information Security continually test early builds of the company's own products. Unlike the other concerns, this area is largely unique to Microsoft, and it provides both challenges and benefits. For example, a pre-release or alpha version of a Windows® desktop operating system may run internally before compatible antivirus software is available, while advancements in security features provide a more secure client computing experience overall.
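The password-policy example in the compliance discussion above can be sketched concretely. The following is an illustrative sketch, not a Microsoft IT tool, of merging regional requirements into one global policy by taking the most stringent value of each attribute; the region names and numeric values are hypothetical.

```python
# Hypothetical regional password requirements; values are illustrative only.
REGIONAL_REQUIREMENTS = {
    "region_a": {"min_length": 8,  "max_age_days": 90, "history_depth": 10},
    "region_b": {"min_length": 10, "max_age_days": 70, "history_depth": 24},
    "region_c": {"min_length": 8,  "max_age_days": 80, "history_depth": 12},
}

def most_stringent(requirements):
    """Combine per-region settings into one policy that satisfies them all."""
    merged = {}
    for settings in requirements.values():
        for key, value in settings.items():
            if key == "max_age_days":
                # A shorter expiration interval is the stricter setting.
                merged[key] = min(merged.get(key, value), value)
            else:
                # A longer minimum length or deeper history is stricter.
                merged[key] = max(merged.get(key, value), value)
    return merged

global_policy = most_stringent(REGIONAL_REQUIREMENTS)
print(global_policy)  # {'min_length': 10, 'max_age_days': 70, 'history_depth': 24}
```

Because the merged policy is at least as strict as every regional one, enforcing it globally satisfies all regions with a single, standard password policy.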
Information security at Microsoft is coordinated across the following stakeholder teams:
- Microsoft Information Security. Manages many of the traditional information security functions such as risk management, policy, and compliance. The Information Security team represents approximately 4 percent of the overall Microsoft IT organization.
- Trustworthy Computing (TwC). Is responsible for forensics, investigations, and network monitoring. TwC is a primary connection point between Information Security and the product group. TwC is also responsible for the Microsoft Trustworthy Computing initiative. This initiative includes working with business groups throughout the company to ensure that their products and services uphold Microsoft security and privacy policies, controls, and best practices. The TwC group also collaborates with the rest of the computer industry and the government to increase public awareness, education, and other safeguards.
- Online Services. Covers information security for properties such as Windows Live™ Hotmail® and MSN® and is organizationally separate to meet the requirements of the online service model.
- Microsoft business groups. Have individuals responsible for security functions within their business.
All these teams, regardless of reporting structure, work closely together and participate in various forums to share best practices.
Microsoft is a metrics-based organization. The highest-level security metrics are captured in a business scorecard (illustrated in Figure 1) that indicates which businesses are engaged and how many current security-related issues require visibility. This simple scorecard is updated quarterly and is used to drive targeted discussions with leadership.
Figure 1. Business scorecard
Each business group has a more detailed scorecard to provide further insight into its information security risk state.
The business owner is ultimately accountable for defining acceptable risk and provides guidance to the Information Security team in terms of ranking risks to the business. The Information Security team assesses risk and defines functional requirements to mitigate risk to an acceptable level. The Information Security team then collaborates with the IT groups that own mitigation selection, implementation, and operations.
Figure 2 illustrates the risk-based decision-making process at Microsoft.
Figure 2. Risk-based decision-making process
The risk-based decision-making process is supported by the Information Security Process, which is the Information Security team's means of measuring control effectiveness. This usually occurs in the form of reports to executive management. Independent third parties often participate in the reviews to determine the effectiveness of specific security controls. The Information Security Process:
- Is a consistent and repeatable approach to measuring the state of the company's information security posture.
- Provides operational guidance that helps the Information Security team supply businesses with information that addresses security risks in their environments and supports their security decisions.
- Provides a life-cycle standard from planning through implementation.
- Is one method for measuring objectives against safeguards and evaluating Information Security's level of success in providing due care.
- Helps prioritize efforts and direct resources to areas of highest risk and highest need.
This standardized approach makes security subject-matter experts integral to architecture and design, which promotes a consistent approach to fulfilling due care in every service provided. It also reduces cost of ownership by creating efficiencies throughout every phase of the security service life cycle.
Figure 3 illustrates the phases of the Information Security process.
Figure 3: Information Security Process phases
The Information Security Process phases include:
- Scope. This phase consists of defining specific focus areas within the entire risk universe.
- Assess and validate. During the first step of this phase, which is essentially problem definition, the Information Security team identifies and analyzes prospective program events with respect to probability and impact. The results of the assessment form the basis for risk management actions. The second step validates that the identified controls are in place, well designed, and working to mitigate or transfer risks. The primary output of this step is a prioritized risk list.
- Prioritize. In this phase, actions to mitigate risk are prioritized relative to one another. The main objective is to analyze the data so that next steps can be determined and prioritized. The prioritized list of activities is captured in the major output of this phase, a Security Action Plan, which details the specific security controls and solutions to be implemented. The Information Security team uses the Security Action Plan to capture and track the delivery of those controls and solutions, coordinating cross-functional teams and using standard project management methodologies.
- Design. This phase is where conceptual designs, architectures, and solutions are developed to mitigate risks.
- Implement. In this phase, approved solutions and safeguards are deployed into the production environment.
- Evaluate. This phase entails ongoing monitoring of the production environment. Effectiveness of solutions is measured at this step. The major outputs of this step are the business and operational security scorecards.
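The assess-and-prioritize steps above can be sketched as a simple scoring exercise. This is a hypothetical illustration, not the Microsoft IT implementation; the event names, probabilities, and impact values are invented.

```python
# Invented example events; probability is a likelihood estimate (0-1) and
# impact is a relative severity rating (here, 1-10).
risks = [
    {"event": "Lost unencrypted laptop",    "probability": 0.30, "impact": 7},
    {"event": "Targeted malware outbreak",  "probability": 0.10, "impact": 9},
    {"event": "Stale access rights abused", "probability": 0.20, "impact": 6},
]

def prioritized_risk_list(risks):
    """Score each event as probability x impact and sort highest first."""
    for risk in risks:
        risk["score"] = risk["probability"] * risk["impact"]
    return sorted(risks, key=lambda r: r["score"], reverse=True)

# The sorted output corresponds to the prioritized risk list that would
# feed a Security Action Plan.
for risk in prioritized_risk_list(risks):
    print(f'{risk["score"]:.2f}  {risk["event"]}')
```

In practice the scoring model, scales, and thresholds would come from the business owners, since they define what level of risk is acceptable.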
Microsoft Information Security uses an operational scorecard (shown in Figure 4) to measure compliance from a defense-in-depth perspective and to measure the state of technical security controls. Each level has sub-metrics, and each of these has further detail. For example, the team can drill down from overall host compliance to see specific vulnerabilities by domain. This scorecard is used in regular reviews with the Microsoft IT chief information officer.
Figure 4: Operational scorecard
In much the same way that the Trustworthy Computing initiative led to the creation of the Information Security Framework, the Microsoft Infrastructure Optimization (IO) initiative provides the transformational model that the Information Security team uses to implement continuous improvement. This model is known as the IO Model.
IO provides a logical roadmap to progress from reactive to proactive IT service management. The vision of IO is to help customers realize the value of their investments in IT infrastructure, to make the IT infrastructure a strategic asset that enables agility within their organizations, and ultimately to help customers create an infrastructure for a people-ready business.
The IO Model is most often used as a strategic tool that helps to evaluate the maturity level of an organization's core technology infrastructure (management, security, and networking) and determine areas (such as application optimization) in which a company can realize significant reduction in costs and improvement in capabilities to a level appropriate for the business. The IO Model is designed to focus not on the type or manufacturer of technologies, but on the capabilities outlined for each stage. The model is based on an industry IT Maturity Model that helps establish a common context for the Information Security team to communicate with its internal customers and set clear goals for itself.
The IO Model is a continuum of four levels or phases of progressively higher maturity:
- Basic. The Basic phase has unpredictable costs associated with IT operations and uncoordinated, mostly manual, and reactive processes to manage the IT infrastructure.
- Standardized. The Standardized phase enjoys more efficiency with a managed IT infrastructure, including some automation and a slightly more predictable cost center.
- Rationalized. In a Rationalized phase, a managed and consolidated IT infrastructure is viewed as helping the business, primarily through automated processes, measured value, and known costs.
- Dynamic. In the Dynamic phase, the IT infrastructure includes fully automated management, dynamic resource usage, and business-linked service level agreements (SLAs), and can be a company's market differentiator. Depending on the industry, a Dynamic IT infrastructure can provide a strategic business advantage for companies. For industries where a Dynamic IT infrastructure would not provide a meaningful business impact, a Rationalized or even Standardized infrastructure may be the optimal state.
When applied to security, the IO Model considers the trade-off between total cost of ownership (TCO) at one end of the spectrum and return on security investment (ROSI) or risk optimization at the other.
The criteria to measure maturity are categorized into people, process, and technology, which must be approached in a balanced way to achieve meaningful improvements. Figure 5 shows how the three categories of criteria relate to the four phases of maturity.
Figure 5. Infrastructure Optimization maturity phases
For the same reasons that adequate involvement of the business decision makers is required to determine the appropriate level of individual security controls for digital assets, their involvement in determining the appropriate maturity phase for IT is paramount. After an organization achieves the appropriate maturity phase, constant attention and maintenance are necessary to prevent declining to a previous phase.
Table 1 lists further examples of the phases in the IO Model, in the context of the three categories of criteria.
Table 1. Examples of Phases in IO Model
People
- Basic: IT staff taxed by operational challenges; users create their own IT solutions.
- Standardized: IT staff trained in best practices such as Microsoft Operations Framework and IT Infrastructure Library (ITIL); users expect basic services from IT.
- Rationalized: Users have the right tools, availability, and access to information; IT is viewed as a strategic asset.
- Dynamic: IT is a valued partner and enables new business initiatives.

Process
- Basic: IT processes undefined; complexity due to localized processes and decentralization.
- Standardized: Central administration and configuration of security; standard desktop images defined but not adopted by all.
- Rationalized: SLAs are linked to business objectives; clearly defined and enforced images, security, and best practices.
- Dynamic: Self-assessing and continuous improvement; easy, security-enhanced access to information from anywhere, anytime.

Technology
- Basic: Update status of desktop computers is unknown; no unified directory for access management.
- Standardized: Multiple directories for authentication; limited automated software distribution.
- Rationalized: Automated identity and access management systems and processes; automated system management.
- Dynamic: Self-provisioning and quarantine-capable systems ensure compliance and high availability.
Note: More information about the IO Model is available at http://technet.microsoft.com/en-us/library/bb735172.aspx#EOC.
A typical company in the Basic phase has manual, localized processes, minimal central control, and limited or unenforced IT policies. Other attributes of the Basic phase include:
- Service levels are low, and business drivers are not used to set IT priorities.
- There is a general lack of knowledge regarding the details of the infrastructure that is currently in place or how to improve it.
- The overall health of applications and services is unknown, due to a lack of tools and resources.
- Infrastructure costs are high, largely due to high-touch and time-consuming software deployments and updates.
- Responding to security threats is a reactive process because there are no consistent security policies or management features.
A company in the Standardized phase can be characterized as having a managed infrastructure that introduces operational controls through standards, policies, servers, and resources. The Standardized infrastructure is centrally managed with some automation. IT operations remain primarily reactive, with some proactive processes to reduce short-term costs. Other attributes of this phase include:
- Service levels are better than Basic but not optimal. IT makes decisions on behalf of the business based on its perception of business needs.
- Meeting regulatory requirements is difficult and costly for the IT department, because it is responding to and solving unforeseen technology incidents.
- There is no formalized process for the standardization and testing of applications, and identity management is not fully centralized.
- End users feel that the introduction of IT governance, standards, and procedures imposes restrictions on their business flexibility and productivity.
Compared to the Basic phase, the Standardized phase offers more thorough support for rich collaboration tools, improved network uptime, and more continuous access to mission-critical data. As a result, the organization will experience an increase in productivity among employees and IT professionals.
At Microsoft, the initial journey moving from the Basic to the Standardized phase included the following enhancements and additions to the Defense in Depth layers:
- Two-factor authentication. Active Directory® directory service policies require certificates stored on smart cards for two-factor authentication on all remote (VPN) connections to Microsoft. Microsoft IT deployed smart cards to help protect user identities from credential reuse and other malicious intent. Smart cards take advantage of Windows technologies, including the Certificate Services feature and Public Key Infrastructure (PKI) security, the Microsoft Base Smart Card cryptographic service provider (CSP), and Extensible Authentication Protocol with Transport Layer Security (EAP-TLS).
- Secure Remote User access to the network. Remote access solutions that require only a user name and a password can allow untrustworthy devices to access the corporate network. Microsoft IT implemented a solution that consists of a number of technologies to help secure remote access connections on the corporate network as part of the Secure Remote User framework. The end-to-end remote access solution requires the use of Windows XP Professional or Windows Vista, and smart cards for two-factor user authentication. Specific security configurations on the remote host are required for users to gain access to the network. Upon logon, remote users are connected to a Microsoft Internet Security and Acceleration (ISA) Server-based quarantine network through Remote Access Quarantine Service (RQS) policies, and the remote systems are scanned for configuration compliance. If the systems do not meet configuration requirements, the users remain in the quarantine network.
- Enforcement of strong passwords. Passwords are the keys to the enterprise network. Microsoft IT enforces strong password policies for all network users. With strict policies and attributes in place, passwords are far less vulnerable. Password policies include the following:
- Passwords expire every 70 days.
- Administrator-level passwords are 15 alphanumeric characters in length.
- User passwords are at least eight alphanumeric characters in length.
- Passwords contain uppercase and lowercase characters, digits, and punctuation, and meet other specifications.
- Passwords do not contain slang, dialect, or jargon in any language and are not based on personal information such as family names.
- New passwords vary significantly from prior passwords.
- Security-enhanced wireless access. An upgrade to the wireless network provided security capabilities such as network segmentation, certificate-based network access, and centralized management and monitoring of network devices.
- Implementation of network intrusion detection systems (NIDS). The Information Security team employed a NIDS platform with real-time event correlation technology to achieve a high level of knowledge about the state of the network, presence of malicious activity, and threat exposure on global and local levels. Prior to this, the response to malicious network attacks was highly reactive and consumed a significant amount of resources and time.
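The user-password complexity attributes listed above can be sketched as a simple character-class check. This is illustrative code under stated assumptions, not what Microsoft IT deploys; expiration, history, and dictionary checks would be enforced by the directory service and are out of scope here.

```python
import re

def meets_complexity(password: str, min_length: int = 8) -> bool:
    """Check the basic complexity attributes: minimum length, plus
    uppercase, lowercase, digit, and punctuation characters."""
    return (
        len(password) >= min_length
        and re.search(r"[A-Z]", password) is not None        # uppercase
        and re.search(r"[a-z]", password) is not None        # lowercase
        and re.search(r"[0-9]", password) is not None        # digit
        and re.search(r"[^A-Za-z0-9]", password) is not None # punctuation
    )

print(meets_complexity("Tr4ff!cLight"))  # True
print(meets_complexity("password"))      # False: no uppercase, digit, or punctuation
```

An administrator-level check would simply call the same function with `min_length=15`, matching the stricter length requirement above.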
A Rationalized infrastructure generally includes proactive processes, provisioning, and policies that have matured and begun to play a large role in supporting and expanding the business. Most important, the costs involved in managing desktop computers and servers are at their lowest. Other attributes of this phase include:
- The Rationalized infrastructure is a business enabler: security enhanced and well managed, with low complexity and high levels of automation.
- The use of zero-touch deployment helps minimize cost, the time to deploy, and technical challenges. The number of images is minimal, and the process for managing desktop computers is very low touch.
- Rationalized customers have a clear inventory of hardware and software and purchase only those licenses and computers that they need. The IT department's primary challenge is to improve integration across implemented products and take advantage of the total value of those products.
- Security is extremely proactive with strict policies and control, from desktop computer to server to firewall to extranet.
- Compared to the Basic and Standardized phases, IT costs are substantially lower, because efficiencies increase through a centrally managed and monitored desktop environment, and improved security administration reduces the burden on IT resources.
- End-user productivity is significantly increased due to the flexibility provided by mobile options and the ability to collaborate across physical locations and time zones.
Microsoft IT has spent the last two years working toward a Rationalized state and focusing on efficiency through automation in identity and access management, certificate provisioning and renewals, and network vulnerability assessments. In addition, moving from the Standardized to the Rationalized phase included the following enhancements and additions to the Defense in Depth layers:
- Network segmentation. Because Microsoft IT does not manage all computers on the Microsoft internal network, isolating trusted, domain-managed computers from unmanaged computers reduces the risk of network compromise. Deploying Internet Protocol security (IPsec) across the corporate network provides a better understanding of where unmanaged computers exist and why they are in use. When IT assets such as source code, financial data, and employee information require more stringent security policies, IPsec provides the flexibility to isolate computers further through a technique called server isolation. Server isolation enables IT administrators to restrict TCP/IP communications of domain members that are trusted computers. These trusted computers can be configured to allow only incoming connections from other trusted computers.
- Two-factor authentication for elevated access accounts. Extending the same smart-card requirement for remote network access to elevated account access limits the potential of impersonation of system administrators.
- Security event monitoring. Capturing security event logs from network servers is a critical part of effective auditing. To be valuable, the event log auditing must adequately address event collection, aggregation, and storage.
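As a rough illustration of the aggregation step described above, the sketch below counts failed-logon events per account, assuming events have already been collected from servers into a common record shape. The field names are hypothetical; event ID 4625 (a failed logon on Windows Vista and later) is used for concreteness.

```python
from collections import Counter

# Hypothetical collected event records; 4624 = successful logon,
# 4625 = failed logon in the Windows Vista / Windows Server 2008 era.
events = [
    {"server": "srv01", "event_id": 4625, "account": "alice"},
    {"server": "srv02", "event_id": 4625, "account": "alice"},
    {"server": "srv01", "event_id": 4624, "account": "bob"},
    {"server": "srv03", "event_id": 4625, "account": "alice"},
]

def failed_logons_by_account(events, failed_event_id=4625):
    """Aggregate failed-logon events across servers, per account."""
    return Counter(e["account"] for e in events if e["event_id"] == failed_event_id)

print(failed_logons_by_account(events))  # Counter({'alice': 3})
```

Aggregates like this, retained over time, are what make the collected logs valuable for auditing: a spike in failed logons for one account across many servers stands out even when each individual server log looks unremarkable.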
Customers with Dynamic infrastructures are fully aware of the strategic value that their infrastructures provide in helping them run their businesses efficiently and staying ahead of competitors. Processes are fully automated, often incorporated into the technology itself, enabling IT to be aligned and managed according to the business needs. Other attributes of this phase include:
- Costs are fully controlled; there is integration between users and data, desktop computers, and servers; collaboration between users and departments is pervasive; and mobile users have nearly on-site levels of service and capabilities regardless of location.
- The Dynamic infrastructure is a core strategic business asset, optimized for business agility and high service levels. It may have a higher cost profile than the Rationalized state, which is offset by its increased value.
- Company executives view IT as a strategic asset instead of a cost center, enabling an organization to be much more agile and better respond to business needs and competitive challenges. Additional investments in technology yield specific, rapid, measurable benefits for the business.
- The use of self-provisioning software and quarantine-like systems for helping to ensure patch management and compliance with established security policies enables the Dynamic organization to automate processes, thus helping improve reliability, lower costs, and increase service levels.
- New employees can be immediately productive, because the IT department can rapidly and proactively respond to end-user issues, and because of the end-to-end integration, automation, and management of data, desktop computers, and servers.
Moving from the Rationalized to the Dynamic phase takes advantage of the key security features made available when Windows Vista on the desktop computer is combined with Windows Server 2008. After these technologies are in place, Microsoft IT can achieve the following enhancements and additions to the Defense in Depth layers:
- Network Access Protection (NAP). Much like the RQS policies and configuration compliance scanning tools that are used to enforce security requirements for remote users, NAP will provide quarantine capability for hosts connected directly to the internal corporate network.
- Strong user authentication. Another security control currently implemented for remote users will be extended to users connected directly to the internal corporate network. Active Directory policies that require certificates stored on smart cards for two-factor authentication for all users will minimize the risk of unauthorized access to the network.
- User Account Control (UAC). UAC provides additional security for users who require administrative access at times. With UAC, users cannot run with their full administrator access token without giving explicit consent. By default, all users, even those with full administrative access, run as if their account has standard user access. When Windows Vista determines that a particular action requires the full administrator access token, the UAC service alerts the user that the action requires consent to run with the user's full access token.
- Windows BitLocker™ Drive Encryption. Windows BitLocker Drive Encryption is a new technology included in Windows Vista Enterprise that helps prevent sensitive data and intellectual property from being accessed when a computer is lost, stolen, or decommissioned. Windows BitLocker Drive Encryption uses hardware-based full-volume data encryption technology that requires a user to enter a personal identification number (PIN) or provide a startup key to start a computer.
Five key messages capture the main considerations for the continuous IO-based transformational program for Microsoft Information Security:
- Information Security must partner with the business. Driving business accountability and partnership with Information Security is paramount to a successful business relationship.
- Security controls should be based on risk management. According to industry standards such as ISO 17799, a risk-based security control structure is required for effective risk management.
- Defense in Depth is fundamental. Defense in Depth is a key principle to all security philosophies.
- Standards-based approach helps mutual understanding. Using a common language across all industries helps provide a consistent way to explain security controls to internal and external parties.
- Changing business and technology requires flexibility. A balance must always be maintained between business and security needs.
The Information Security team uses a Defense in Depth posture for its security architecture, and controls continue to evolve to meet business needs. The controls used in the categories within the current Information Security architecture are as follows:
- People and process
- Security strategy planning
- Information security governance
- Information security policies
- Training and awareness
- Security for partner and extranet connections
- Security for remote access
- E-mail hygiene and trustworthy messaging
- Hardening of the wireless network
- Security event collection
- Host-based segmentation
- Combating malicious software
- Security for mobile devices
- Security Development Life Cycle for IT (SDL-IT)
- Management of source code
- Enterprise digital rights management (DRM)
- Strong passwords
- PKI services
People and Process
Although Defense in Depth relies heavily on technology solutions for enforcement, an organization first needs to determine what should be enforced. Equally important is to identify who makes those decisions and how they are made, communicated, and enforced.
ARCHimedes is the Information Security planning group tasked with development of security strategy for Microsoft IT. The team consists of security architects, program management, and technical writing staff who report to the Information Security general manager. As of June 2007, the team comprised five architects, one program manager, and one technical writer.
The primary output of ARCHimedes is security strategy packages, which are detailed documents that outline a strategy via near-term, mid-term, and long-term security architectures. The team works on an approximate five-year time horizon and delivers one package about every five months. Each package strives to:
- Outline a strategy to move IT from its current state to the long-term architecture.
- Answer topic-specific concerns as gathered from stakeholders.
- Deliver tangible requirements and action items (for Microsoft IT and product teams) that are necessary for the realization of these strategies.
Several teams in Microsoft IT are involved in creating deployment architectures. The architectures that ARCHimedes delivers are created solely to provide a reasonable structure for articulating security requirements. Ultimately, security requirements and associated time frames are the key deliverable components of the security strategy packages.
The primary focus of security strategy packages is the protection of Microsoft digital assets. Packages are created with strong consideration of observed business-enabling trends such as outsourcing, mobility, ubiquitous connectivity, and the decreased ability to rely on traditional network perimeters.
Separately, team members devoted ample time to research, stakeholder meetings, analysis, content development, and content review. The team met collectively for approximately 10 to 15 hours per week to develop the strategy.
Figure 6 shows the phases of individual package development.
Figure 6. Development process for a security strategy package
The development process for a security strategy package includes:
- Define initial scope.
- Engage stakeholders to gather requirements and key concerns.
- Conduct additional research relevant to the particular topic (industry contacts, white papers, Microsoft internal product architects and strategists, etc.).
- Develop architectures (near-term, mid-term, and long-term).
- Determine security requirements based on architectures.
- Validate architectures and strategy against a set of quality controls:
- Key concerns
- Core tenets
- Alignment with product strategies and other strategies
- Stakeholder needs and requirements
- Security risks
- Publish and distribute security strategy package.
- Distribute action items and requirements to appropriate Microsoft IT engineering, architecture, and policy groups. This step drives movement toward reaching the vision.
Security Strategy Planning
Evolving issues and the changing needs of Microsoft IT pose a unique challenge in planning security strategies. In response, Microsoft IT has developed a program that strives to consistently, comprehensively, and authoritatively deliver a long-term strategic plan. The issues that Microsoft IT considered during the development of the program include:
- Long-term security strategic planning was needed to supplement existing short-term security planning activities.
- Product groups appealed for predictable security requirements.
- Regulatory requirements concerning all enterprises, including Microsoft, have compelled Microsoft IT to take proactive steps to help improve the security of the IT utility.
- Frameworks such as the Microsoft Information Security Program (MISP) and Standard of Care (StOC) cited the need for the existence of the program.
Guiding Principles and High-Level Strategy
In most cases, data is the asset that security controls help protect. The evolution of security controls, therefore, means moving the controls closer to the data itself. In the Network era, with the prevalence of data networks, security controls are focused on edge devices and on network-based monitoring. For example, screening routers and firewall appliances are used to define network segments and to provide access control across segment boundaries. In addition, network intrusion detection agents may be inserted in network paths to perform traffic analysis, signature-triggered protection, and anomaly detection. Link encryptors can also encrypt traffic in transit.
In the Host era, security controls are focused on the computer end-devices. For example, the computer may run antivirus software, a personal firewall, and IPsec. The computers themselves encrypt and decrypt data communicated with other computers. Network-based intrusion detection might be replaced by host-based intrusion detection.
In the Data Self-Protection era, security controls are focused on the data itself. For example, access control information is appended to the actual data. Encryption is applied directly on the data.
Enterprise DRM is a technical expression of this concept. It acknowledges and addresses trends that are eroding the effectiveness of network-based controls that often depend on the presumption of an impenetrable network edge. It even acknowledges trends that undermine host-based security, such as portability, where data moves easily to and from a myriad of computing and storage form factors.
In practice, elements from all eras will exist simultaneously. This is true today, and it will be true over the five-year planning period that ARCHimedes works within. Evolution means a shift over time in the dominant control category—a move from a primarily network-centric view of security, to a primarily host-centric view, and ultimately to a primarily data-centric view.
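The Data Self-Protection idea, in which the access policy travels with the data itself, can be illustrated with a toy envelope. This sketch is an assumption-laden simplification: real enterprise DRM also encrypts the payload and manages keys centrally, whereas this standard-library-only model shows just the self-describing, tamper-evident wrapper. All names and the shared key are hypothetical.

```python
# Toy model of data-centric protection: the access policy is appended
# to the data, and an HMAC makes tampering with either one evident.
# NOT real DRM; the payload here is not encrypted, and the key is a
# hard-coded demo value.
import hmac, hashlib, json

SECRET = b"demo-key"  # hypothetical shared key

def _body(policy, data: bytes) -> bytes:
    return json.dumps(policy, sort_keys=True).encode() + b"|" + data

def protect(data: bytes, allowed_users) -> dict:
    """Attach an access policy and a tamper-evidence MAC to the data."""
    policy = {"allowed_users": allowed_users}
    mac = hmac.new(SECRET, _body(policy, data), hashlib.sha256).hexdigest()
    return {"policy": policy, "data": data, "mac": mac}

def can_open(envelope: dict, user: str) -> bool:
    """Verify integrity first, then check the embedded access policy."""
    expected = hmac.new(SECRET, _body(envelope["policy"], envelope["data"]),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(envelope["mac"], expected)
            and user in envelope["policy"]["allowed_users"])

env = protect(b"design document", ["alice"])
assert can_open(env, "alice")
assert not can_open(env, "mallory")
```

Because the policy and MAC travel inside the envelope, the control remains effective even when the data leaves the network perimeter, which is the point of the data-centric era.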
Figure 7 illustrates the guiding principles.
Figure 7. Guiding principles
Relationships with the consumers and stakeholders of Information Security within Microsoft are governed at two levels. The compliance and solution-provisioning relationship is transactional: through day-to-day interactions, it implements and builds improved controls within the business and within the IT infrastructure. The strategic relationship level governs the alignment of commercial imperatives and business strategy with the obligations of regulation and industry security practices. This alignment occurs inside the MISP through a shared accountability model. Business leaders are held accountable for driving commercially reasonable security practices inside their businesses; Information Security guides and measures them in this endeavor. In turn, business leaders provide strategic insight back to the Information Security organization to help mold security strategy and architecture to be compatible with current and future business objectives and models.
This dual-level approach addresses the need for business enablement with the demand of regulation and security, along with responsible business practices and procedures.
The importance of security governance issues is increasing due to greater public expectations regarding data handling. Recent and proposed legislation, as well as judicial and departmental rulings, suggest that increased security and protection of data will continue to be mandated by law across all industries.
Value of Information Security Governance
The value of information security governance is threefold:
- It enables the conduct of private business in a public world.
- It enhances shareholder value.
- It helps to shield Microsoft from liability.
Enabling the Conduct of Private Business in a Public World
Fostering the protection of the privacy and trade secrets of Microsoft, and of those with whom Microsoft conducts business, enables the conduct of private business in a public world and is key to a successful governance program. Governance is more than policing compliance; it helps guide everyday business functions. The scope of security governance is wide: it pertains to the conduct of business with customers, suppliers, consumers, enterprises, employees, and workers' councils.
Governance encompasses the protection of trade secrets and digital assets owned by Microsoft, its business partners, and suppliers, and it includes striving to safeguard the privacy of information through data protections. This includes the privacy of Microsoft customers and employees as well as private information entrusted to Microsoft by its business partners.
Enhancing Shareholder Value
The implementation of internal and external programs to prevent the shrinkage of digital assets enhances shareholder value. Shrinkage in this context means the reduction of the value of digital assets—their business value and their contribution to the bottom line—due to untimely or unauthorized exposure. Both theft with criminal intent and accidental exposure can cause this shrinkage. In addition, once a digital asset has been exposed, it may be difficult to recover and protect in the future.
Helping to Shield Microsoft from Liability
Corporate entities in the global arena are subject to various regulations, and executives may be held liable in certain cases when these regulations are not upheld. Information security governance helps shield Microsoft from civil and criminal proceedings, allowing management to concentrate on running the daily business. A climate of organizational due care supports the implementation of adequate controls. Microsoft must reasonably fulfill fiduciary and regulatory obligations in a manner that can be documented, tested, and attested to.
Microsoft Information Security Program
The purpose of the MISP is to enable and guide a consistent, company-wide, and effective risk-based information security plan to help protect the confidentiality, integrity, and availability of information located on Microsoft systems and information assets. The MISP policy sets forth the organizational accountabilities and principles that require Microsoft to operate a security program. Additionally, the policy establishes the framework for a risk-based and policy-based approach across the enterprise to help protect Microsoft information assets.
An effective information security program demonstrates reasonable care through sustained processes and procedures across seven areas: governance and oversight, policies and procedures, training and awareness, enforcement, auditing and monitoring, investigation and remediation, and reporting. Due care is a standard established within United States federal case law and guidelines, and it is a key example of how and why governance efforts are important in today's business arena. These ongoing processes and procedures are vital in demonstrating the intent to comply, and they provide a basis for judging key performance indicators.
The Safeguard Taxonomy within the MISP framework defines security criteria. In an assessment of the maturity of information security services, this taxonomy helps evaluate the pervasiveness of security controls by services and safeguards. These safeguards are based on various internal and external sources, such as the International Organization for Standardization (for example, ISO/IEC 17799:2000(E)) and the Federal Trade Commission.
- Business continuity. Controls and safeguards designed to help protect the continued operations of key business functions according to their value.
- Communications security. Controls and safeguards designed to help protect the confidentiality, integrity, and availability of information when in transit. Reference: ISO/IEC 17799:2000(E).
- Compliance. The act of adhering to internal and external requirements for the Microsoft systems and information. Reference: ISO/IEC 17799:2000(E).
- Digital asset classification and control. The act of assigning value to information according to standard criteria and the policy-driven framework that mandates and recommends safeguards that help protect a particular class of information.
- Operations management. The oversight of processes and procedures that support the delivery and functioning of Microsoft systems and information. Reference: ISO/IEC 17799:2000(E).
- Organizational security. The structure through which individuals and groups cooperate systematically in the implementation of security for Microsoft systems and information. This structure includes the allocation of roles and responsibilities to individuals and groups, in addition to rules that guide the interactions between these individuals and groups. Reference: ISO/IEC 17799:2000(E).
- Personnel security. Controls and safeguards designed to mitigate risks of human error, theft, fraud, or misuse of Microsoft systems and information introduced by Microsoft personnel through the careful screening of personnel and the institution of personnel training and awareness programs. Reference: ISO/IEC 17799:2000(E).
- Physical asset control. Controls and safeguards designed to help maintain the confidentiality, integrity, and availability of Microsoft systems and information facilities and the assets contained within. Reference: ISO/IEC 17799:2000(E).
- System development and maintenance. The use of a repeatable methodology for the planning, development, testing, deployment, operation, and modification of an information system. Reference: ISO/IEC 17799:2000(E).
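The "Digital asset classification and control" safeguard above describes a policy-driven framework that mandates safeguards for each class of information. A minimal sketch of that idea is a lookup from classification to required baseline safeguards; the class names and safeguard labels below are hypothetical illustrations, not Microsoft's actual scheme.

```python
# Illustrative classification-to-safeguard mapping: each class of
# information mandates a baseline set of controls. Hypothetical names;
# real taxonomies are defined by policy, not hard-coded.
SAFEGUARDS_BY_CLASS = {
    "public":       set(),
    "internal":     {"access_control"},
    "confidential": {"access_control", "encryption_in_transit"},
    "secret":       {"access_control", "encryption_in_transit",
                     "encryption_at_rest", "rights_management"},
}

def required_safeguards(classification: str) -> set:
    """Return the baseline safeguards mandated for a class of data."""
    return SAFEGUARDS_BY_CLASS[classification]

assert "rights_management" in required_safeguards("secret")
assert required_safeguards("public") == set()
```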
Standard of Care
The StOC is how Microsoft Information Security demonstrates due care. Control objectives, standards, and tasks make up the StOC Library. The control structure maps to industry frameworks and standards to ensure that the StOC Library addresses each of them and to provide evidence of compliance with each.
The StOC Library is Microsoft Information Security's view of control objectives. The Control Objectives for Information and Related Technology (COBIT) standard spans all of IT, and the ITIL is a framework of best practices for delivering high-quality IT services. Neither set of controls fully applies to information security, whereas ISO 17799 focuses primarily on information security. Therefore, the StOC Library incorporates ISO 17799 along with additional controls from COBIT, ITIL, and industry best practices.
Microsoft uses a layered approach to information security policies. This approach evolved out of business needs that remain fueled by an appreciation of the value of information assets, an evolving threat landscape with intentional and unintentional breaches and loss of information, and technology advancements enacted within and outside the company. The information security policies at Microsoft include:
- MISP Policy. This policy establishes accountabilities that require Microsoft to operate a security program. It also establishes a framework for a risk-based and policy-based approach to protecting assets.
- Information Security Policy. This policy contains principles for protecting and properly using corporate resources. It supports specific security standards, operating procedures, and guidelines for business units.
- Information Security Standards. This policy provides requirements and prescriptive guidance that enable users to comply with the Information Security Policy.
Security Policy Evolution
By fiscal year 2005, a set of 22 policies provided guidance, operational procedures, requirements, and principles. In fiscal year 2006, the Information Security team undertook a project to revise those policies. The project focused on three key areas:
- Content management
In conjunction with regulatory initiatives, the Information Security team moved to a tiered framework. This framework established the MISP at tier 1, which asserts the need for security policies. At tier 2, the MISP delivers the guiding principles for all use of corporate assets, which may be applied to clarify the intent of standards or to resolve complex situations. At tier 3, standards deliver the requirements and controls, including some prescriptive guidance.
Other highlights of the policy-revision project were as follows:
- Consolidation of 22 policies into one policy and five standards. Simplifying and clarifying high-level security principles and supporting requirements.
- Publication of the MISP as a corporate policy. Endorsing Information Security's company-wide mission of protecting Microsoft information assets.
- Policy usability survey deployed. Providing a window into the user experience and the first tangible metrics for security policy.
- Launch of updated security policy Web site. Offering a more streamlined organization via an expandable table of contents.
Business Drivers for Information Security Policies
Information security policies demonstrate company values and drive desired behaviors. They help ensure regulatory compliance and alignment with industry standards, and they are useful for conducting internal audits. Audits help ensure that company procedures support policies and that employees are following the procedures; they also help measure the overall security health of the organization. Without policies to govern the corporate infrastructure, the potential for loss of intellectual property, personally identifiable information (PII), and customer data increases dramatically.
An organization should minimize exceptions or extensions for policy compliance because they introduce a degree of risk that is not otherwise present. When extenuating circumstances arise, exceptions may apply. These exceptions should be in place only for a temporary and defined period.
Figure 8 illustrates how Microsoft manages risk by using security policies that drive behavior, support values, and limit exceptions.
Figure 8. How Microsoft uses information security policies
Exceptions to information security standards and operating procedures can be approved through the Security Design Review process, which requires that:
- There are no alternative solutions.
- The appropriate stakeholders and executive management agree to accept the risk associated with granting the exception or extension. In addition, they must implement any mitigation recommendations for the duration of the exception or extension.
- All exceptions are logged in the exception-tracking tool and may be revoked at any time if the risk is determined to be too great.
The general manager of Information Security has the authority to deny or revoke any exception if it puts corporate resources or information at an unacceptable risk level.
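The exception life cycle described above, which involves a defined temporary period plus revocation authority, can be sketched as a small record type. This is an illustrative model only; the field names and the tracking approach are hypothetical, not the actual exception-tracking tool.

```python
# Minimal model of a tracked policy exception: every exception carries
# a defined expiration date and can be revoked at any time, mirroring
# the process described in the text. Hypothetical fields and values.
from datetime import date

class PolicyException:
    def __init__(self, standard: str, owner: str, expires: date):
        self.standard = standard
        self.owner = owner
        self.expires = expires      # exceptions are temporary by design
        self.revoked = False

    def revoke(self):
        """May be invoked at any time if the risk is deemed too great."""
        self.revoked = True

    def is_active(self, today: date) -> bool:
        return not self.revoked and today <= self.expires

exc = PolicyException("password-standard", "example-app-team",
                      expires=date(2008, 1, 31))
assert exc.is_active(date(2007, 12, 1))
exc.revoke()
assert not exc.is_active(date(2007, 12, 1))
```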
Policy Management Challenges
Microsoft IT must enforce all of its unique security policies consistently to maintain credibility. Involving key executives during the authorization process lends influence and credence. Implementing a repeatable process ensures that the appropriate roles, responsibilities, and administrative controls are in place.
Terminology and Clarity
Policies must be clear and comprehensive, yet flexible enough to accommodate a wide range of data, activities, and resources. The following statements clarify the difference between policies and standards:
- Policies are high-level statements that define what is to be protected, but they do not provide details about processes or technology.
- Standards include technology-specific mechanisms and solutions that are required for compliance. Standards define how settings should be configured and detail which settings are required.
- Standard operating procedures contain the details about how to comply with the policies and standards.
Ensuring that policies do not include standards and best practices can be challenging. Policy language must be simple and concise so that it is accessible for those who speak English as a second language.
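To make the policy/standard distinction concrete: a policy states what must be protected, while a standard pins down the specific required settings. The sketch below checks a host's configuration against a hypothetical password standard; the setting names and values are illustrative, not Microsoft's actual requirements.

```python
# A standard expressed as machine-checkable settings (the policy behind
# it would only say "accounts must use strong passwords"). All values
# here are hypothetical examples.
PASSWORD_STANDARD = {
    "min_length": 8,
    "complexity_required": True,
    "max_age_days": 70,
}

def compliant(host_config: dict) -> bool:
    """True if the host's settings meet or exceed the standard."""
    return (host_config["min_length"] >= PASSWORD_STANDARD["min_length"]
            and host_config["complexity_required"]
            and host_config["max_age_days"] <= PASSWORD_STANDARD["max_age_days"])

assert compliant({"min_length": 10, "complexity_required": True,
                  "max_age_days": 60})
assert not compliant({"min_length": 6, "complexity_required": True,
                      "max_age_days": 60})
```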
If users are not aware of policies and standards, they cannot be held accountable for compliance. A Usability Review milestone has been incorporated into the Standards Management lifecycle to help ensure a focus on delivering requirements that are easy for users to find and understand. A scenario-based view of these requirements enables users to glean only those requirements that correspond to common usage scenarios. In the future, corresponding scenario-based awareness tactics will help users learn about these requirements. Additionally, all employees attend an orientation that introduces information security policies and standards and explains where to find them. An awareness team provides online training and monthly policy communications.
Monitoring and Enforcement
A comprehensive security scorecard is being developed to help the businesses understand their progress against mitigating security risks.
Strong security is a business enabler that helps improve productivity and protect assets. For any organization, awareness is one of the primary tools used to achieve strong security levels. However, raising security awareness among staff continues to be a key challenge. The 2004 Global Information Security Survey by Ernst & Young questioned security managers from around 1,200 organizations across 51 countries. The survey found that 70 percent of respondents failed to list security awareness as a top priority.
Note: The 2004 Global Information Security Survey can be found at: www.ey.com/global/download.nsf/International/2004_Global_Information_Security_Survey/$file/2004_Global_Information_Security_Survey_2004.pdf.
Successful security efforts must address three key elements: people, processes, and technology. Developing clear and effective policies is an integral part of promoting security and compliance. Poor security awareness is one of the primary barriers to achieving strong security levels. Creating an awareness and understanding of those policies is an essential enterprise responsibility.
Security Awareness Program
Because human behavior is an unknown variable, and therefore the weakest link in the security chain, a security awareness program is required to encourage large numbers of employees to behave consistently with respect to security requirements. The program must inspire individuals to be proactive in preventing security incidents. The program must also be able to reach the constant influx of new employees, vendors, and contingent staff.
Components of the Microsoft IT Security Awareness Program include:
- Creating partnerships. The challenge of security awareness is creating strong and effective partnerships from which all groups can benefit in a collaborative effort. Each group must help the other achieve its individual goals. In these joint efforts, care should be taken to not dilute the message of any one group.
- Mitigating risk.
- Keep security messages fresh and in circulation.
- Target new employees and current staff members.
- Set a goal to have 70 percent of the staff trained at any time.
- Repeat the information to help raise awareness.
- Communication vehicles. Multiple communication vehicles should be used to reach as many people as possible. These include:
- Required online training
- In-depth learning guides
- Monthly newsletters
- Monthly and bi-monthly electronic magazine articles
- Printed collateral
- Web sites
- Technologies that support Microsoft security awareness. These include:
- Learning management system. Host and record participation in online training sessions.
- Flash. Format interactive online training materials.
- Microsoft Office system. Create in-depth guides by using Microsoft Office PowerPoint®; deliver newsletters and electronic magazines by using Microsoft Office Word and Microsoft Office Outlook®.
- Windows Media® Player (.wmv files). Create security awareness videos for online training, new-employee orientations, and on-demand content from Web sites.
- Windows SharePoint® Services. Create an internal security Web site by using the integrated custom scenario tool.
It is critical for risk mitigation that security awareness efforts are wholeheartedly adopted by the business and integrated into individual standard operating procedures. To more readily empower the diverse Microsoft user base to comply with security requirements, those requirements are being tailored to correspond to specific, common usage scenarios.
The network is what provides the business with the most productivity advantages while at the same time exposing the business to the greatest number of significant information security vulnerabilities. Extending access to services on the network to partners and remote users provides further potential business advantages and security concerns.
An extranet is an extension of an enterprise intranet that enables remote Internet users to access and exchange intranet data. Extranets enable businesses to increase efficiency with suppliers and employees and, when properly implemented, increase the security of intranet data. Microsoft connects to a variety of business partners through extranet environments that Microsoft IT maintains. Although Microsoft enforces security standards for internal devices, it cannot guarantee that its partners enforce these standards. As a result, there is a risk that third-party devices could be used to exploit vulnerabilities, attack Microsoft, and put intellectual property at risk.
Microsoft has developed the following solutions for providing security for remote access to its networks and third-party extranets:
- Smart cards for administrators. The theft of a domain administrator's credentials can jeopardize the integrity of an entire domain. An effective way to mitigate this risk is to require smart cards on domain controllers and selected high-security server computers located on the extranet. Smart card readers are installed on the domain controllers and the computers, and a flag is set on administrator accounts to require smart cards for interactive logons.
- Vendor network retirement, migration, or remediation. Managing vendor security is challenging because of the number of environments and the architecture of vendor networks. Therefore, most Microsoft partner applications are hosted on the extranet. Vendor connections have been either moved to the extranet or migrated to the Internet. In addition, some vendor connections have been remediated so that Microsoft IT manages them. Microsoft IT has the ability to end connections in any Microsoft-owned partner environment at any time.
- Partner account security. Microsoft must protect its assets from external exposure and the delegation of user management tasks on the extranet. As a result, all partner accounts must be owned and managed by Microsoft IT, and they must adhere to all account and password policies applied to the corporate network. In addition, local member server accounts can no longer be created in partner environments.
- Elimination of direct vendor connections. Partners are required to be authenticated at an access control point prior to gaining access to the corporate network. Access is limited to only those systems and network resources that vendors require to conduct business. The access control points are implemented through ISA Server 2004 firewalls combined with remote access policy restrictions available through Internet Authentication Service (IAS).
- Future state. The extranet is segmented based on which server needs to communicate with another server on a specific network port. This application segregation is accomplished through the Windows Server 2003 Service Pack 1 (SP1) firewall, Active Directory security groups, and IPsec.
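The host-based segmentation and server-isolation controls described here and earlier in the paper can be modeled as a simple admission decision: an isolated server accepts traffic only from trusted, domain-managed computers. In practice, enforcement is done by IPsec policy and Active Directory security groups rather than application code, and the host names below are hypothetical.

```python
# Illustrative model of server isolation: isolated servers accept
# connections only from trusted, domain-managed hosts; other servers
# accept any host. Hypothetical host names; real enforcement is IPsec.
TRUSTED_DOMAIN_HOSTS = {"payroll-01", "hr-db-01", "build-03"}

def accept_connection(source_host: str, server_is_isolated: bool) -> bool:
    """Decide whether an incoming connection should be allowed."""
    return (not server_is_isolated) or source_host in TRUSTED_DOMAIN_HOSTS

# A trusted domain member reaches an isolated server; an unmanaged
# laptop does not, but it can still reach non-isolated servers.
assert accept_connection("payroll-01", server_is_isolated=True)
assert not accept_connection("unmanaged-laptop", server_is_isolated=True)
assert accept_connection("unmanaged-laptop", server_is_isolated=False)
```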
The Microsoft business goal for providing security for remote access is to facilitate high-security access anywhere, anytime for employees. Some of the current risks involved with delivering this access include:
- Security vulnerabilities and weak configurations pose threats.
- Hackers frequently attack home computers.
- Unauthorized activity with valid credentials is difficult to detect and prevent.
- Unmanaged remote devices are at risk for viruses, worms, and other malicious software.
- Broadband Internet access, which maintains a constant Internet connection, heightens risk exposure.
Since 2001, Microsoft has used smart cards to integrate physical and network access. The company currently has more than 100,000 smart cards in use worldwide. Smart cards function as key cards to manage physical access, such as controlled door locks to access a physical site or between sectors within the site. The combination of smart card and radio frequency identification (RFID) technology helps control the level of network and physical access for all users. This technology also enables cashless transactions.
Smart card technology meets the Microsoft requirement for security-enhanced remote access anywhere, anytime by:
- Providing tamper resistance.
- Requiring physical access through a smart card reader.
- Requiring a PIN.
- Enforcing data security, including the logon certificate's private key and e-mail signing certificate.
- Giving each user the ability to view the contents of his or her smart card, reset the PIN, and renew certificates.
- Taking advantage of Windows 2000 Server and Windows Server 2003 infrastructure and Windows XP and Windows Vista client operating system technologies.
- Accommodating additional functionality that Microsoft IT requires.
Smart Card Deployment
Microsoft IT limited the number of people authorized to issue, manage, and provide smart card support. It delegated authorized security officers around the world to support the smart card program. For the deployment, Microsoft IT used the existing PKI available through Certificate Services in Windows 2000 Server and Windows Server 2003. Microsoft IT examined the existing deployment process for badges and asked the OEM to incorporate a smart card chip into the badge to achieve a single-badge solution. Smart card security officers ensured that the correct cards were given to the intended users, and users were required to create alphanumeric PINs between five and 15 characters in length and to meet other password security requirements.
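As a sketch of the PIN policy described above (alphanumeric PINs between five and 15 characters), a validator might look like the following. The requirement that a PIN contain both a letter and a digit is an assumption for illustration; the additional password security requirements Microsoft IT enforced are not detailed here.

```python
def is_valid_pin(pin: str) -> bool:
    """Check a smart card PIN against the stated policy: alphanumeric,
    5 to 15 characters. Requiring at least one letter and one digit is
    an illustrative assumption, not a documented Microsoft IT rule."""
    if not 5 <= len(pin) <= 15:
        return False
    if not pin.isalnum():
        return False
    # Assumed composition rule: at least one letter and one digit.
    return any(c.isalpha() for c in pin) and any(c.isdigit() for c in pin)
```

A deployment would typically enforce such a check at card issuance and at PIN reset, before the PIN is written to the card.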
Delegate Issuance Officers (DIOs) managed pre-built replacement smart cards with unique serial numbers, and then distributed the replacement cards only after acquiring approval from the security team at Microsoft headquarters. Ongoing operational costs include supporting five security officers and two PKI personnel, in addition to hardware costs for supporting PKI and smart card certification authorities.
Microsoft IT learned the following lessons from its smart card deployment:
- Before a smart card deployment, an IT organization should fully understand the project goals for the organization. An IT organization should ask:
- Why is it being deployed?
- How can it benefit the organization?
- What is the expectation for employing the technology within the next 12 to 24 months?
- Is the staff well trained in PKI technology?
- An IT organization should make special note of any special deployment considerations:
- How will the organization manage deployment exceptions, such as Macintosh computers and personal digital assistants (PDAs)?
- What are the unique enterprise priorities?
- Are these users able to use Remote Access Service (RAS) currently?
- Will a limited number of users who do not have smart cards be allowed to access the network?
- Although Microsoft IT noted some performance issues, particularly the inconvenience that users experience from the initial 30-second logon delay, the benefits of increasing network security far outweigh the impact of the delay.
- Because a domain administrator is able to access domain controllers and high-security server computers, the theft of a domain administrator's credentials can jeopardize the integrity of an entire domain. Microsoft IT has implemented smart cards to manage accounts with elevated permissions, minimizing the ability for accounts to be compromised and improving the audit trail for accounts that have elevated permissions.
- All Secure Multipurpose Internet Mail Extensions (S/MIME) and digital certificates are stored on smart cards. Employees can use any workstation equipped with a smart card reader to send and receive S/MIME-enabled e-mail messages. The cryptographic service provider (CSP) infrastructure for smart cards is capable of a one-time export of its private key for the purpose of key archiving.
Future of Two-Factor Authentication
In the future, Microsoft may extend two-factor authentication to additional scenarios, such as:
- Signing stock grants.
- Securing financial or human resources (HR) data.
- Eliminating passwords for all domain users.
In addition, smaller or different form factors may provide two-factor authentication for physical and network access in the future. These form factors include:
- Flash drive storage.
- Storage of multiple credentials, such as X.509, Security Assertion Markup Language (SAML)-based InfoCard, encryption, or RFID.
- A super USB Flash drive on a key fob.
- Bluetooth-enabled smart cards or smartphones.
Spam, viruses, and other malicious software sent through e-mail messages threaten to overwhelm the enterprise environment. Microsoft IT addresses these risks by using an evolutionary process that provides a flexible, responsive approach. Microsoft IT employs various methods for filtering spam and viruses at multiple network locations, which provides several layers of protection. This approach minimizes the amount of incoming e-mail allowed past the network perimeter and helps secure all e-mail messages by using IRM and S/MIME.
Scanning for spam at the network perimeter significantly reduces the amount of messaging content that is processed and stored internally. In addition, scanning attachments and removing malicious software before message delivery dramatically reduces the threat exposure. This approach further reduces threats by using additional scans at the client level. Microsoft Exchange Server 2003 employs multiple technologies for managing scanning and filtering to simplify the work of messaging administrators.
E-Mail Filtering Process
A series of mechanisms filters more than 20 million e-mail messages submitted to Microsoft IT gateways each day. Each of these mechanisms reduces the amount of spam permitted to pass. During attacks, daily e-mail volumes can double, triple, or even quadruple. However, current layers of defense continue to help protect the messaging environment.
Figure 9 illustrates Microsoft IT's multilayered approach to e-mail filtering.
Figure 9: Multilayered approach to e-mail filtering
Connection filtering entails blocking known untrustworthy IP addresses by directly querying third-party providers of lists of blocked senders, also known as block lists. It also compares the IP address of the connecting server with a list of IP addresses previously denied. Connection filtering blocks the bulk of incoming Simple Mail Transfer Protocol (SMTP) connections.
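Block-list queries of the kind connection filtering performs typically follow the standard DNSBL convention: reverse the IPv4 octets, append the list provider's DNS zone, and check whether the resulting name resolves. The sketch below builds such a query name; the `zen.spamhaus.org` zone is used only as an illustrative example of a block-list provider.

```python
import ipaddress

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNS name to query for a block-list lookup.

    The conventional DNSBL scheme reverses the IPv4 octets and appends
    the list's zone; an A-record answer means the address is listed.
    """
    addr = ipaddress.IPv4Address(ip)  # raises ValueError for bad input
    octets = str(addr).split(".")
    return ".".join(reversed(octets)) + "." + zone

# A resolver lookup of the returned name (for example, with
# socket.gethostbyname) would then decide whether to reject the
# incoming SMTP connection.
```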
Sender filtering examines the originating address of incoming e-mail messages and compares it with a list of administrator-configured blocked senders. Sender filtering is useful in mitigating the risks of e-mail attacks. However, sender filtering alone is not an adequate defense against evolving spam techniques.
Recipient filtering rejects messages at the gateway layer based on criteria such as the recipient's address. Microsoft IT uses recipient filtering to block millions of "mailbomb," or bulk spam, e-mail messages addressed to invalid recipients in a single day.
Intelligent Message Filter (IMF) performs heuristics-based message analysis at the Internet gateway and rates messages for filtering. It deletes 38 percent of the messages that remain after sender and recipient filtering. IMF was originally developed for Hotmail to block spam. With Microsoft Exchange Server 2007, the Content Filter Agent performs the filtering.
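The four filtering layers described above can be sketched as a pipeline in which each stage discards messages before the next stage runs, so that later, more expensive checks see less traffic. The block lists, recipient directory, and score threshold below are hypothetical stand-ins for illustration, not the values Microsoft IT uses.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender_ip: str
    sender: str
    recipients: list
    spam_score: float  # heuristic rating, as an IMF-style filter would assign

# Hypothetical stand-ins for the real block lists and directory.
BLOCKED_IPS = {"203.0.113.9"}
BLOCKED_SENDERS = {"spam@example.com"}
VALID_RECIPIENTS = {"alice@contoso.com", "bob@contoso.com"}
IMF_DELETE_THRESHOLD = 0.8  # illustrative scale, not the real IMF rating

def filter_mail(messages):
    """Apply connection, sender, recipient, and heuristic filtering in order."""
    accepted = []
    for m in messages:
        if m.sender_ip in BLOCKED_IPS:            # connection filtering
            continue
        if m.sender in BLOCKED_SENDERS:           # sender filtering
            continue
        if not any(r in VALID_RECIPIENTS for r in m.recipients):
            continue                              # recipient filtering
        if m.spam_score >= IMF_DELETE_THRESHOLD:  # IMF-style heuristic filtering
            continue
        accepted.append(m)
    return accepted
```

Ordering matters: the cheap IP-level check runs first, and content analysis runs only on what survives, which mirrors the volume reduction described above.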
Messaging Best Practices
Microsoft IT has implemented the following best practices for messaging to provide the highest level of defense while still maintaining usability:
- Use a multilayered defense for effective results. A single line of defense is no longer effective for any enterprise environment.
- Scan for spam at the messaging gateway, before scanning all e-mail for viruses. The goal is to process and transport as little spam as possible through the network. Scanning messages that will later be identified as spam is not cost-effective, because those messages will eventually be blocked. An organization should scan incoming and outgoing e-mail for viruses to help keep the messaging environment free of viruses. Scanning outgoing e-mail also helps prevent internal users from spreading viruses to other users.
- Delete rather than clean infected messages. Sending infected messages through the network presents a potential liability. Although anti-virus software can attempt to clean the virus from a message, all messages tagged as infected should be deleted.
- Strip attachments of certain file types. An organization should implement attachment stripping at the e-mail gateway layer and match the attachment-stripping policy with the attachment-blocking policy enforced at the client level.
- Disable security notifications to Internet senders. Microsoft IT considers it good practice not to send security notifications to Internet senders, so that malicious senders cannot learn which security actions were taken and attempt to circumvent them in the future.
- Block blank senders. E-mail messages from blank senders are often illegitimate.
- Generate security notifications for infected outgoing Internet e-mail messages. If an internal user inadvertently sends an infected message, the user needs to be notified so that he or she can remove any infection from his or her computer.
- Use restricted distribution groups. Restricted distribution groups reduce overall e-mail messaging volumes and mitigate risk.
- Consistently enforce antivirus policies on client systems. Individual users must understand corporate antivirus policies, their role in the process (the actions that they can control—for example, not clicking links, executable files, or attachments from unknown sources), and the level of control that is available.
- Control the network perimeter and routing. An organization should implement as many defensive measures as possible throughout the messaging infrastructure, beginning as close to the Internet as possible.
- Block e-mail messages from certain IP addresses and domain names. IP address-based filtering and sender filtering are quick, effective measures to block malicious messaging traffic and reduce its impact on the infrastructure.
Microsoft Trustworthy Messaging
Microsoft uses IRM and S/MIME to help protect e-mail messages that travel within and outside the corporate network. Microsoft Information Security developed a Digital Asset Classification and Handling Guide to provide guidance in using Windows Rights Management Services (RMS) for Windows Server 2003 and S/MIME.
Note: For more information about Trustworthy Messaging at Microsoft, refer to the IT Showcase technical solution brief "Trustworthy Messaging at Microsoft" at http://technet.microsoft.com/en-us/library/bb735271.aspx.
The evolving culture at Microsoft demands security-enhanced access anytime, anywhere. However, the proliferation of wireless devices has led to unique security challenges. Wireless devices introduce new risks, such as vulnerable protocols, potential access for unauthorized users and devices, rogue access points, and denial of service (DoS) challenges. Microsoft mitigates these risks by using a combination of people, processes, policies, and technology.
Figure 10 shows the business driver, potential risks, and solutions for providing security for wireless devices.
Figure 10. The business case for wireless security
Wireless technologies and processes require persistent renovation to remain effective. This renovation includes updating the wireless local area network (WLAN), improving the existing infrastructure to provide necessary access, and mitigating security risks. Newer, more secure protocols such as Wi-Fi Protected Access (WPA) and Wi-Fi Protected Access 2 (WPA2) must be implemented.
Discrete wireless solutions should be tailored to each requirement scenario, including:
- Enabling virtual local area networks (VLANs) and domain isolation.
- Incorporating admission and access control technologies, such as 802.1X, EAP, and Remote Authentication Dial-In User Service (RADIUS).
- Integrating automatic detection of rogue access points by using an air monitor and RF technologies from Aruba Networks.
- Offering security-enhanced wireless access where it has been previously unavailable by using service set identifier (SSID) beaconing.
- Implementing and distributing wireless Group Policy that promotes association with the most secure SSID and supports both WPA and WPA2 privacy.
At Microsoft, future improvements to the security of wireless access include the following:
- Tightening security and access restrictions by using a network access point.
- Enhancing security between RADIUS and access point infrastructure by using IPsec.
- Using Aruba Networks security functions to define individual user or user group access functions, coupled with Aruba Networks firewall policies, to help ensure least privileged access to company assets and resources.
NIDS provide unique security visibility and intelligence-gathering capabilities to the enterprise. The ability of NIDS to monitor and gather critical security information without affecting the monitored network is a key advantage that distinguishes NIDS from other security controls. A NIDS platform is one element of the overall defense-in-depth strategy for detection on the network. Trends in the industry toward active network intrusion prevention, as well as information protection, are under review and analysis for future deployment at Microsoft.
Microsoft Information Security uses the NIDS platform to:
- Assess the current state of the network.
- Develop real-time intelligence about the presence of suspicious and malicious activity.
Decisions for sensor placement are driven by criteria that include the classification of digital assets, risk, process and use models and workflows for digital assets, and the presence of key internal controls.
Client Host-Based Security Controls
Today, many policy controls are implemented on the client host, such as antivirus software, IPsec, antispyware, Group Policy objects, and operating system updates. Host-based measures provide effective mitigation against a multitude of security threats.
In Active Directory service environments, deployment of host-based controls requires host membership in the proper domain—a process that can present a deployment challenge. The fact that unmanaged hosts can be the most problematic client population on a network compounds this challenge. Thus, the least secure portion of a network resists implementation of the controls designed to provide security for it. This portion of a network then becomes the point of origin for recurring incidents, such as viral outbreaks that affect the production networks.
A NIDS platform is not host dependent and can support a variety of applications, protocols, and deployment scenarios, such as:
- Monitoring the network at aggregate locations.
- Varying the sensor visibility between the core, distribution, and access layers of the network infrastructure.
- Enabling organizations to focus intelligence efforts and map detection policies to a risk matrix.
Microsoft Information Security employs the NIDS platform with real-time event correlation technology to achieve a high level of knowledge about the state of the network, the presence of malicious activity, and threat exposure on global and local levels.
Future Directions in Intrusion Detection
After deploying intrusion detection systems, organizations often complain about the large volume of alert data that is generated. This is a known result of the postdeployment process that an organization must endure before realizing a greater return on investment.
An intrusion detection system generates events that must be reviewed, interpreted, and correlated with other threat indicators or attacks present on the network. The knowledge required to perform these tasks is not limited to a particular intrusion detection system, but includes:
- Knowledge of the monitored network and of the applications and services deployed on the network.
- An understanding of attack patterns and threat execution methodologies, vulnerability assessment data for the protected environment, and other functional areas.
Unfortunately, analysts who combine this skill set with experience analyzing intrusion detection system event logs are rare. Because prior training and experience transfer only partially to new environments, hiring and intensive immersion training are only partially effective.
The solution is to automate data analysis and achieve better intelligence from an intrusion detection system's data by combining data from multiple sources. Microsoft Information Security is now using an automated correlation engine, known internally as Security Event Manager (SEM). SEM combines detection data from NIDS, antivirus software, firewall and proxy services, Audit Collection Services (ACS), and vulnerability assessment systems such as Secure Environment Remediation (SER). SEM automates multiple, time-consuming aspects of the intrusion analysis workflow, such as attack pattern recognition and research, that previously relied on manual correlation.
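A minimal sketch of the kind of cross-source correlation SEM automates might group detection events by host within a time window and flag hosts reported by more than one source. The window size, source threshold, and event shape here are illustrative assumptions, not SEM's actual correlation rules.

```python
from collections import defaultdict

def correlate(events, window_seconds=300, min_sources=2):
    """Flag hosts reported by multiple detection sources close in time.

    `events` is an iterable of (timestamp, source, host) tuples, where
    `source` might be "nids", "antivirus", "firewall", and so on. A
    fixed sliding window per host is a simplified stand-in for a real
    correlation engine's rules.
    """
    by_host = defaultdict(list)
    for ts, source, host in events:
        by_host[host].append((ts, source))
    flagged = []
    for host, hits in by_host.items():
        hits.sort()
        for ts, _ in hits:
            # Distinct sources reporting this host within the window.
            in_window = {s for t, s in hits if ts <= t <= ts + window_seconds}
            if len(in_window) >= min_sources:
                flagged.append(host)
                break
    return flagged
```

The value of even this toy version is that a single NIDS alert stays quiet, while the same host appearing in both NIDS and antivirus data within minutes surfaces for an analyst, which is the workload reduction the paragraph above describes.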
Future deployments of intrusion detection systems will incorporate additional data points and achieve higher levels of detection accuracy and network intelligence. In addition, host intrusion detection systems (HIDS) and host intrusion prevention systems (HIPS) technology will enable focused protection of critical resources and assets in mitigation of assessed risks, adding another layer of defense.
As HIDS and HIPS technology is reintroduced into the Microsoft network environment, its ability to provide per-host or per-location real-time network health and intelligence data will be a key contribution to the network defense and monitoring strategy.
Security event logs are a critical part of effective auditing. To be valuable, the event logs must adequately address event collection, aggregation, and storage.
Note: The limitations described in this section are for the event log prior to Windows Vista. The Windows Vista event log has been rewritten to address these issues.
- Due to a technological limit, the maximum size for all event logs (Application, Security, and System) is 300 megabytes (MB). (For more information, refer to the article "The event log stops logging events before reaching the maximum log size" at http://support.microsoft.com/default.aspx?scid=kb;en-us;312571.) On a busy domain controller, this means that the Security Audit log may wrap very quickly.
- The event log does not have a built-in event-forwarding capability. An enterprise that wants to analyze Security event logs must write its own collection system.
- To access the Security Audit log, the user must be an administrator on the remote computer. Workarounds do exist, but they are not common knowledge, and currently there is no user interface to enable the workaround.
- The event format for a specified event can change between Windows versions (even between levels of service packs for a specified operating system).
- The event log does not store the event text as it is displayed in Event Viewer. Only the event field values are stored. To display the event in Event Viewer, the event field values are processed by a message dynamic-link library (DLL) to display the field names next to the event field values. For example, for EventID 538 (user logoff), the message DLL string looks like the following:
- User Name: %String01%
- Domain: %String02%
- Logon ID: %String03%
- Logon Type: %String04%
Note: This message DLL string can change format between versions of the operating system.
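The merge that Event Viewer performs can be sketched as follows: the stored field values are substituted into the message DLL template at the %StringNN% placeholders. This is a simplified illustration of the mechanism, not the actual Event Viewer implementation.

```python
import re

def render_event(template: str, fields: dict) -> str:
    """Substitute %StringNN% placeholders in a message DLL template with
    the raw field values stored in the event log. Unknown placeholders
    are left as-is, mimicking a missing field value."""
    return re.sub(
        r"%(String\d+)%",
        lambda m: fields.get(m.group(1), m.group(0)),
        template,
    )
```

This also illustrates why the format is fragile: if the template in the message DLL changes between operating system versions, the same stored field values render differently.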
Windows Vista and Windows Server 2008 Event Log Improvements
The Microsoft Windows NT® Event Log service in Windows Vista and Windows Server 2008 has made a number of improvements in scalability and aggregation. It removes the previous limit of 300 MB on the event log file size and provides a way to aggregate events across multiple systems. Events can be logged locally at a sustained rate of approximately 20,000 events per second. The event logs are aggregated from multiple systems and multiple logs onto a single event collector at a sustained rate of up to approximately 6,000 events per second into a specified event log on the collector. In Windows Vista, the raw event is stored in XML format, so the event field names and event field values are kept in the same place.
Additional code is necessary to aggregate events into a database.
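Because the Windows Vista raw event is XML with field names and values stored together, aggregation code can parse each record into a flat row before inserting it into a database. The XML shape below is a simplified, hypothetical example, not the exact Windows event schema.

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical shape of an XML event record; the real
# Windows Vista event schema is richer than this.
SAMPLE_EVENT = """
<Event>
  <EventID>4634</EventID>
  <Data Name="TargetUserName">alice</Data>
  <Data Name="TargetDomainName">CONTOSO</Data>
</Event>
"""

def parse_event(xml_text: str) -> dict:
    """Return the event as a flat dict, ready to insert as a database row.
    Field names travel with their values, so no message DLL is needed."""
    root = ET.fromstring(xml_text)
    row = {"EventID": root.findtext("EventID")}
    for data in root.findall("Data"):
        row[data.get("Name")] = data.text
    return row
```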
Audit Collection Services
ACS collects, processes, and transfers audit events to the ACS collector, where events are stored in a Microsoft SQL Server™ database. ACS is part of Microsoft System Center Operations Manager 2007 and uses a client/server model for event collection, aggregation, and storage.
ACS has been integrated into System Center Operations Manager 2007 as an optional component, and it cannot be installed separately from the Microsoft Operations Manager agent. It provides a consistent schema for events received, regardless of the level of the operating system that sent the event.
Note: For more information about System Center Operations Manager 2007 and ACS, refer to "Implementing System Center Operations Manager 2007 at Microsoft" at http://technet.microsoft.com/en-us/library/bb735238.aspx.
The forwarder has been running on Microsoft domain controllers, data center servers, source server computers, and developer desktop computers since late 2003. The domain controller, data center, and source server computers are all running Windows Server 2003 SP1 or Windows Server 2008, but the developer desktop computers vary from Windows 2000 through Windows Vista. The forwarder consumes approximately 3 MB of RAM (all buffers are pre-allocated, so this does not change over time) and 5 to 10 percent CPU on startup, but less than 1 percent during runtime operations. Installing and uninstalling does not require a restart. The communication from the forwarder to the collector is encrypted and mutually authenticated, and the forwarder compresses the message to reduce network usage.
All events are sent from the forwarder and filtered at the collector. This allows gap detection in the event stream at the collector, which is recorded in the ACS SQL Server database as a separate event. After the collector receives the event, it reduces the amount of storage required in the database schema by reusing strings already allocated in the database. This string reuse can result in a 20 to 40 percent reduction in database storage requirements versus the original event log size. ACS has a subscriber interface so that the collected event stream can be inspected by a custom-built application for immediate analysis and alerting.
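The string reuse the collector performs can be sketched as an interning table that stores each distinct string once and replaces repeated values (user names, host names) with integer IDs. This is an illustrative model of the technique, not the actual ACS database schema.

```python
class StringTable:
    """Assign each distinct string one integer ID so repeated values are
    stored once, sketching the string reuse the ACS collector performs."""
    def __init__(self):
        self._ids = {}
        self.strings = []

    def intern(self, s: str) -> int:
        if s not in self._ids:
            self._ids[s] = len(self.strings)
            self.strings.append(s)
        return self._ids[s]

def store_events(events):
    """Replace each event's string fields with table IDs.
    Returns (rows, table); rows reference strings by ID only."""
    table = StringTable()
    rows = [tuple(table.intern(v) for v in event) for event in events]
    return rows, table
```

Because audit streams repeat the same account and computer names constantly, referencing each string once can plausibly yield the 20 to 40 percent storage reduction cited above.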
Because the total internal host population is greater than 300,000, collecting audit events from every host would be prohibitively expensive to store. Microsoft internal deployments therefore focus on three critical populations identified by risk assessment.
Domain Controllers
All Microsoft IT-managed domain controllers use ACS to collect domain-level changes, including computer and user account management, domain-level group changes, password changes, logon activity, and domain trust changes.
Source Code (Server and Client Computers) and Data Center Server Computers
Source code and data center server computers are critical groups that include local logon events to computers, group changes on computers (elevation of permissions), process accounting, local account password resets and changes, and local account creation.
Although the network provides the means for most security attacks, the host devices on the network are generally the ultimate targets of those attacks. Defense in Depth measures are crucial at the host layer and pose unique challenges with the increasing types of hosts connecting to today's network.
There are two approaches to segmentation in the corporate intranet—physical and logical. Types of logical segmentation include network-based, host-based, and data-based. Host-based segmentation at Microsoft is achieved through IPsec. Where segmentation is concerned with security, the purpose is to help protect the confidentiality and integrity of non-public information assets that reside on hosts that use the corporate intranet.
Host-based segmentation applies to host computers that store, process, use, or transmit data that requires protection from hostile communications. Elements on the host that require protection include system configurations, applications, credentials, and data. A suite of mechanisms collectively known as the Domain Isolation Model has been developed to isolate the host from hostile communications until such communication is validated against a set of security criteria. The Domain Isolation Model describes a collection of hosts that communicates over the corporate intranet in a logically isolated grouping of intended hosts. When protected by the Domain Isolation Model, hosts will not accept malicious incoming packets from anonymous hosts on the network.
The business benefits of host-based segmentation are as follows:
- Decreased risk of network attacks. A significant potential exists for network-based attacks and unauthorized malicious traffic on the perimeter of the corporate intranet. IPsec operates at the network layer, and can therefore apply protection against unauthorized malicious traffic for all present and future applications on a host. Because it operates beneath all applications, it applies protection against such traffic, equally and completely, to all applications on the host. IPsec can also be configured to provide network-level attack defense by blocking unicast incoming or outgoing IP traffic based on source and destination addresses, protocols, TCP and User Datagram Protocol (UDP) ports, and firewall behavior. Although not a full-featured firewall, IPsec provides complex static filtering based on IP addresses, whereas Windows Firewall provides stateful filtering for all addresses on a network interface.
- Credible traffic authentication. IPsec provides traffic-level authentication, which is the logical mitigation for the risk of unauthorized traffic. Current protocols (for example, TCP/IP) and applications are not capable of end-to-end traffic-level authentication. IPsec authentication is accomplished with its negotiation protocol, Internet Key Exchange (IKE).
- Optional traffic encryption. Where segmentation is concerned with security, the purpose is to protect the confidentiality and integrity of non-public information assets residing on hosts that use the corporate intranet. IPsec promotes data confidentiality by enabling the optional Encapsulating Security Payload (ESP) protocol that encrypts the IP payload. In the cases where IPsec encryption is used, the confidentiality of the data flow is protected between the peer network stacks. Not all traffic on the corporate intranet is encrypted, and not all traffic that is encrypted uses IPsec encryption. For integrity, IPsec applies a cryptographic checksum to ensure that each received packet has not been tampered with in transit.
- Host security state compliance. With the introduction of NAP and Authenticated IP in IPsec (the replacement for the IKE negotiation protocol), IPsec can provide credible assertion of a host's compliance with security state policy—that is, host health.
- Sub-segmentation capability. Host-based logical segmentation also allows for sub-segmentation (referred to as server isolation) and is accomplished through the use of the Access this computer from the network user right. By combining IPsec and Kerberos with this user right, an administrator can restrict access to only specified computers.
The Open System Interconnection (OSI) model describes a networking framework for implementing protocols in seven layers:
- Layer 7, application. Provides application services for file transfers, e-mail, and other network software services.
- Layer 6, presentation. Formats and encrypts data to be sent across a network, eliminating compatibility problems.
- Layer 5, session. Establishes, manages, and ends connections between applications.
- Layer 4, transport. Provides the transparent transfer of data between systems.
- Layer 3, network. Provides switching and routing technology.
- Layer 2, data link. Furnishes transmission protocol knowledge and management, handles errors in the physical layer, and provides flow control and frame synchronization.
- Layer 1, physical. Provides the hardware a means for sending and receiving data.
Six of the OSI layers (all except Layer 1) are the basis for logical segmentation. VLANs, for example, can be used at Layer 2 for segmentation. Examples of Layer 3 and 4 segmentation technologies include firewalls, routers, and IPsec. Internet Information Services (IIS), virtual roots, and hierarchical directory-based systems can be used at Layer 7. Logical segmentation can also be characterized as network based, host based, or data based. Currently, Microsoft uses all three types on the corporate intranet.
Figure 11 illustrates the architecture of the corporate intranet after Microsoft IT logically isolated it by using IPsec.
Figure 11. Architecture of the corporate intranet after logically isolating it by using IPsec
A boundary computer is a domain isolated computer that can accept incoming communications from non-compliant hosts. The Domain Isolation Model is subject to the same constraints as a majority of contemporary segmentation models, in that they are not opaque. Innate to these models is the need to allow some points of access across the segmentation border.
Although most Infrastructure hosts are in the Domain Isolation Model, they are excluded from IPsec policy. Both compliant and non-compliant hosts are permitted to communicate with Infrastructure hosts.
Both network-based segmentation (via screening routers) and host-based segmentation (via IPsec) are concurrently used between hosts in the Domain Isolation Model and hosts in the extranet.
The Domain Isolation Model has been in production since its introduction into the Microsoft corporate intranet in 2003. The technical underpinning of the Domain Isolation Model is IPsec. Microsoft first implemented IPsec in the IP version 4 (IPv4) stack of Windows 2000. Although IPsec is well known in the industry, almost all implementations use tunnel mode. IPsec is often used between routers in this mode. As a result, the fact that it was designed to add security to the TCP/IP stack is often overlooked. Microsoft IT implements IPsec on hosts by using transport mode.
A rigid security policy requires a host to deny, by default, all incoming requests for access and respond only to requests from computers that present strong traffic authentication. To accomplish this, Microsoft IT currently provisions IPsec to domain-joined computers (and non-domain-joined computers at the remote end of a RAS connection). IPsec is used between initiating and responding hosts to isolate a community of computers with similar states of health and authorization constraints. Such computers are said to be members of the Domain Isolation Model.
To successfully negotiate a connection to the Domain Isolation Model host, the initiating computer's credential is a Kerberos token or, if Kerberos is not supported, a digital certificate.
The IPsec policy most widely deployed at Microsoft does not prefer encryption; that is, the policy is configured to use ESP-null. However, the policy allows optional encryption, and computers can negotiate and communicate with encryption (ESP) if one of the peers requests it.
An IPsec policy consists of a set of rules. Each rule consists of a filter list and an action, and can specify an authentication method, a tunnel, and a connection point. A filter list contains a set of filters. Each filter specifies a source address or network, a destination address or network, and optionally, a port and protocol. Filter actions include Permit, Block, or a customizable Negotiate Security action.
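The rule structure described above can be sketched as a simple data model. This is an illustrative sketch only; the class and field names are hypothetical and do not reflect the actual Windows IPsec policy schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class FilterAction(Enum):
    PERMIT = "permit"          # pass traffic without IPsec
    BLOCK = "block"            # drop traffic
    NEGOTIATE = "negotiate"    # negotiate security (e.g., ESP-null or ESP)

@dataclass
class Filter:
    # A filter matches traffic by source/destination and, optionally, port and protocol.
    source: str                        # address or network, e.g. "10.0.0.0/8"
    destination: str
    protocol: Optional[str] = None     # e.g. "tcp"
    port: Optional[int] = None

@dataclass
class Rule:
    # A rule pairs a filter list with an action; authentication method is optional.
    filters: list
    action: FilterAction
    auth_method: Optional[str] = None  # e.g. "kerberos" or "certificate"

@dataclass
class IPsecPolicy:
    rules: list = field(default_factory=list)

    def action_for(self, source: str, destination: str) -> FilterAction:
        """Return the action of the first rule whose filter list matches."""
        for rule in self.rules:
            for f in rule.filters:
                if f.source == source and f.destination == destination:
                    return rule.action
        return FilterAction.BLOCK  # deny by default, as the policy above requires

policy = IPsecPolicy(rules=[
    Rule(filters=[Filter("10.0.0.1", "10.0.0.2")],
         action=FilterAction.NEGOTIATE, auth_method="kerberos"),
])
print(policy.action_for("10.0.0.1", "10.0.0.2").value)  # prints "negotiate"
```

The default of `BLOCK` for unmatched traffic mirrors the deny-by-default posture that the Domain Isolation Model enforces on incoming requests.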
Many organizations mistakenly rely on one device as a single line of defense against malicious software attacks. However, a layered approach that uses proactive and reactive mechanisms throughout the network is most effective. These mechanisms include:
- Antivirus software. Helps protect a system from known and, in some cases, unknown or newly revised malicious software. It can also be used as a corrective mechanism. Although it should be implemented at potential network choke points, it should not be the only instrument to deter attacks.
- Content scanners or filter scanners. Help prevent user-defined code from entering the organization. As a complement to antivirus software, content scanners are typically packaged as plug-ins for existing gateway products. The scanners are proactive, assisting in the removal of malicious software that is unknown or that antivirus software has not detected. They can also remove a predefined set of executable files, such as e-mail attachments, in front of Internet Mail Connectors; block access to known malicious sites; or allow access only to business partner sites.
- Intrusion detection software. Assists in removing known network viruses or worms, such as Nimda and Code Red. It can also be helpful in identifying paths of vulnerability and locations where a hacker has been.
The best approach to combating malicious software is using a combination of these mechanisms. An organization should test and evaluate these mechanisms prior to implementation to ensure the best selection for a particular environment. After implementation, continuing to test and evaluate is critical to ensure that the best software and solutions are in use.
New Approaches to Malicious Software
The past few years have revealed new types of malicious software, from blended attacks to spyware, keyboard loggers, hacking tools, and Remote Access Trojans (RATs). Still common are spam or phishing attempts that typically arrive in an e-mail message that contains a URL or an attachment. Social engineering tactics prompt uneducated users to "click and inflict" the payload. New scanning software is available to address this malicious software:
- Behavioral scanners. These scanners typically work on the gateway, but they can also work on the desktop. These scanners run suspicious code in a virtual machine environment and then compare file characteristics to a predefined set of characteristics to determine whether the code is hostile. Behavioral scanners are effective at finding unknown viruses, worms, and Trojan horses (including the detection of Trojan horses that antivirus software does not detect), bots, spyware, hacker tools, and keyboard sniffers. Behavioral scanners yield a higher percentage of false positives, so they may be a better solution for the gateway than for the desktop.
- Spyware scanners. Many of these scanners are designed to work on the gateway and the desktop. Like antivirus software, spyware scanners are reactive and require continued maintenance.
- Trojan scanners. These scanners are more recent and, like spyware and antivirus software, are signature based and reactive in nature.
- Homegrown scanners. These scanners are developed in an organization to target specific malicious software that other scanners do not target.
Automated Vulnerability Scans
One of the methods that Microsoft IT uses to address vulnerability management is active scanning and auditing. Automated vulnerability scans reduce the impact of a malicious attack and minimize the loss of intellectual property. Automated vulnerability scans work with Microsoft technologies, including Microsoft Systems Management Server (SMS), Microsoft SQL Server 2005, Windows Server 2003, and IIS.
The automated vulnerability scan process involves monitoring and scanning all Microsoft environments. This effort includes monitoring for known vulnerabilities on network devices, hosts, applications, trusts, and accounts, in addition to ensuring compliance with security policies. The major challenges of automated vulnerability scans include the timeliness of software updates and working with unique corporate networks or large networks that span more than 500,000 network host devices.
Timeline for Security Updates
Microsoft IT regularly releases updates to correct vulnerabilities for systems and applications that the Information Security team deems critical. To help ensure compliance, Information Security ensures consistent and timely installation of these updates. It also enforces the application of security updates without end-user or operator intervention and prevents end users from disabling security patch management unless they have an approved exemption.
Microsoft uses a timeline for security updates to chart the update process from the initial report of a problem. This process involves a staged response that ends with problem resolution. It provides a metric for evaluating and improving the response to security issues at Microsoft.
Attacks that use malicious code designed to exploit known software flaws now occur rapidly, often within three days of the announcement of a flaw. Microsoft continues to accelerate its responses to these attacks.
Microsoft IT has also compressed its emergency update process to two days by using continuous e-mail and IT-Web communications to initiate voluntary patching. IT-Web is an enterprise-wide Web services management solution that Microsoft IT developed. After 48 hours, voluntary and forced patching begin through SMS, which automates software distribution over networks. After risk assessment, the final measure of enforcement is to initiate an automatic port shutdown process seven days after the event. Through this process, non-compliant computers are disconnected from the corporate network.
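The staged enforcement timeline described above can be expressed as a simple mapping from elapsed time to enforcement stage. The function name and stage labels are illustrative, not part of any Microsoft IT tooling.

```python
def enforcement_stage(hours_since_release: float) -> str:
    """Map elapsed time since an emergency update release to the
    enforcement stage described above: voluntary patching first,
    forced patching via SMS after 48 hours, and automatic port
    shutdown of non-compliant computers after seven days."""
    if hours_since_release < 48:
        return "voluntary patching (e-mail and IT-Web notices)"
    if hours_since_release < 7 * 24:
        return "forced patching via SMS"
    return "port shutdown of non-compliant computers"

print(enforcement_stage(24))   # voluntary patching (e-mail and IT-Web notices)
print(enforcement_stage(72))   # forced patching via SMS
print(enforcement_stage(200))  # port shutdown of non-compliant computers
```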
In today's evolving mobile environment, there is a strong trend toward using mobile devices (feature phones and smartphones) in corporate application workflows traditionally reserved for the mobile computer. This capability requires the mobile information worker to access corporate, partner, or customer data from either side of the corporate firewall by using a wide variety of mobile devices.
To ensure that information assets are handled in accordance with enterprise security policy, the mobile device must be capable of participating in the security control framework that governs protection of data at rest, in transit, or in use. Encryption of the mobile device's file system and removable storage media satisfies the criteria for data at rest, and a mobile enterprise DRM client covers many data-in-transit and data-in-use scenarios.
The primary security concerns for mobile devices in the enterprise stem from the gap in security controls relative to the computer, given the parity of roles of mobile devices and mobile computers. The mobile device must be capable of participating in the enterprise system management infrastructure, for health checks, updates, and policy enforcement. The mobile device must not become a less secure entry point into the corporate intranet, and care must be taken as to what data is available to mobile devices. Loss of possession is a constant vulnerability given the size of the devices, and the enterprise must use multiple safeguards when the devices are used in business workflows that process non-public data.
User impersonation is a primary concern when mobile devices are clients on the corporate intranet and have access to sensitive data. The process of enrolling or importing a user authentication, device authentication, or encryption key certificate onto a mobile device for the first time must be done in a security-enhanced and impersonation-free manner. The organization can accomplish this by requiring that enrollment be performed in partnership with a known healthy computer that proxies the enrollment for the mobile device and can enforce two-factor user authentication.
Considerations for data protection include the following:
- The mobile device must be capable of being provisioned to automatically encrypt high-value corporate data destined for the onboard and removable memory.
- The mobile device must be capable of being provisioned to enforce PIN or password policy on startup and when exiting hibernation.
- The mobile device must be capable of being provisioned to lock the screen after settable inactivity time (typically 5 to 15 minutes).
- The mobile device must be capable of being provisioned to erase internal memory or reset to factory default after a specified number of incorrect password entries (typically five tries) or via remote command from a management console.
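The provisioning requirements above amount to a compliance checklist that a management console could evaluate. The sketch below is hypothetical: the `DeviceConfig` fields and thresholds simply encode the typical values cited in the list (5 to 15 minute lock, wipe after about five failed tries).

```python
from dataclasses import dataclass

@dataclass
class DeviceConfig:
    # Provisioned state as reported by a (hypothetical) mobile device.
    encrypts_storage: bool     # onboard and removable memory encryption
    requires_pin: bool         # PIN/password at startup and on waking
    lock_timeout_minutes: int  # inactivity before the screen locks
    wipe_after_failures: int   # bad-password tries before wipe (0 = disabled)

def policy_violations(cfg: DeviceConfig) -> list:
    """Return a list of policy violations; an empty list means compliant."""
    violations = []
    if not cfg.encrypts_storage:
        violations.append("storage encryption disabled")
    if not cfg.requires_pin:
        violations.append("no PIN/password policy enforced")
    if not 1 <= cfg.lock_timeout_minutes <= 15:
        violations.append("inactivity lock exceeds 15 minutes or is disabled")
    if not 1 <= cfg.wipe_after_failures <= 5:
        violations.append("wipe-on-failure not set to five tries or fewer")
    return violations

good = DeviceConfig(True, True, 10, 5)
print(policy_violations(good))  # []
```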
Microsoft IT developed and implemented the Trustworthy Computing Security Development Life Cycle for IT (SDL-IT). SDL-IT is used to inventory and assess all line-of-business (LOB) applications or software that needs to withstand malicious attack. Microsoft IT proactively promotes application security and privacy by integrating SDL-IT into the software development life cycle.
SDL-IT assesses the development of LOB applications to identify and resolve security and privacy vulnerabilities. It also enables application risk management on an operational, strategic, and tactical basis. The SDL-IT process has been in use at Microsoft for four years. More than 2,000 applications have run through the SDL-IT process, 900 of which have undergone extensive security design and code reviews.
One of the primary challenges of SDL-IT is ensuring that all application teams follow the SDL-IT process. Meeting this challenge means integrating the data center's approval process for virtual IP addresses with the SDL-IT process. The SDL-IT team must provide approval before the data center can approve virtual IP addresses for any application.
Implementation risks involve identifying any issues that the SDL-IT might miss and mitigating all risks. SDL-IT implementers must follow uniform processes and methodologies when conducting security and privacy reviews.
Figure 12 shows the way the software development life cycle and SDL-IT align to provide security for software.
Figure 12. Alignment of SDL-IT and software development lifecycle
SDL-IT Process Alignment
The SDL-IT process is not optional, and all LOB application teams must complete it before production. Enforcing the SDL-IT process is integral to its success.
All critical development information is added to SDL-IT, including application information, design documents, source code, and server information. The SDL-IT process then identifies and logs bugs in security and privacy databases, exception requests, and scorecards.
SDL-IT is a service offering in which teams that participate in the process are charged for threat modeling review, code review, and other required services. SDL-IT is continuously improving to make the process more cost-effective.
Applications that complete this process may be:
- Internal-facing applications, such as employee support sites like portal sites for human resources and benefits information.
- External sites, like partner portals or online-hosted applications.
- Sites designed to support business operations, such as Helpdesk solutions (incident tracking, credit card processing, and knowledge base stores).
Typical application profiles include vendor invoice processing, product marketing, product support sites, and external customer and partner sites. The level of risk involved with an application determines the level of SDL-IT processing. The levels of risk are as follows:
- High-security-risk applications. These applications require:
- Threat modeling to identify assurance level. The Application Consulting and Engineering (ACE) Security team reviews and signs off on the threat model prior to application development.
- Comprehensive code review in which the application code is reviewed for security vulnerabilities. These vulnerabilities need to be mitigated before the application can go into production.
- Deployment review that assesses the underlying infrastructure on which the application will be deployed, including IIS settings, SQL Server settings, and Windows-based client and server configurations.
- Medium-security-risk applications. These applications require only the code review and the deployment review.
- Low-security-risk applications. The application team self-services these applications. The ACE Security team provides the necessary training through which the application team can assess the security posture of the application. Questions, as they arise, are brought to the ACE Security team.
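The mapping from risk level to required SDL-IT services described in the list above can be captured in a small lookup table. The table below is a sketch of that mapping only; the keys and service names are shorthand for the reviews described in the text.

```python
# Required SDL-IT services by application risk level, per the levels above.
REQUIRED_SERVICES = {
    "high":   ["threat modeling", "code review", "deployment review"],
    "medium": ["code review", "deployment review"],
    "low":    [],  # self-serviced by the application team after ACE training
}

def required_reviews(risk_level: str) -> list:
    """Return the SDL-IT services an application must complete."""
    if risk_level not in REQUIRED_SERVICES:
        raise ValueError("unknown risk level: " + risk_level)
    return REQUIRED_SERVICES[risk_level]

print(required_reviews("medium"))  # ['code review', 'deployment review']
```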
Several SDL-IT services help customers identify and mitigate security challenges. Three of these services are:
- Security Code Review service. Helps customers understand and mitigate code-level risks by assessing critical applications for known code-level security vulnerabilities. The Security Code Review service uses both automated code scanning tools to provide a high level of accuracy and relevancy, and manual line-by-line review of code by senior analysts. The service also employs proprietary tools to enhance the quality of code used in LOB applications.
- Threat Modeling service. Gives customers an overview of the threat modeling process and identifies threats during the design phase to build a security strategy that recognizes architectural and design concerns before creation of an application. The service makes threat modeling easy for non-security technology professionals to use and educates users on how the threat model fits into the software development life cycle.
- Secure Application Development Training service. Provides focused information to help developers understand and address the most common security problems. The training service, built on years of hands-on experience, provides an understanding of the threats behind the fundamental security problems.
Benefits of SDL-IT
In the past five years, Microsoft has realized the following benefits:
- Customers have gained access to essential knowledge for identifying and addressing risks in their systems.
- Microsoft IT has performed 2,000 code reviews and logged 50,000 bugs.
- Customers' risk of financial loss and hacker access has been significantly reduced.
Microsoft is continuing to improve the SDL-IT process by:
- Closing software vulnerabilities that could allow unauthorized access, and increasing vigilance and methods of mitigation.
- Detecting and identifying application teams that implement code updates without enabling the mandatory virtual IP addresses.
- Developing new tools and methods to scan code, decreasing the time spent conducting application security code reviews.
Microsoft source code is a high-value digital asset. Microsoft has implemented a formal, enterprise-level service for managing source code security. The goal of the service is to reduce the risk that unauthorized users will compromise the integrity and confidentiality of source code. Microsoft IT created this service for each business group at Microsoft and inserted multiple checkpoints to minimize potential damage from unintentional disclosures. The endeavor requires support from the organizations that manage source code, and Microsoft IT is still working to eliminate redundant infrastructure, access by non-essential personnel, and inconsistent processes.
In the past, various independently managed source code repositories, called Source Depot repositories, existed throughout the network. This structure caused inconsistencies across the independent environments and their access models, undermining the ability to enforce security. At one point, any computer on the corporate network could access Source Depot server computers at the network layer, which meant that compromising a single computer on the network might have led to penetration of one or more Source Depot server computers.
Microsoft IT restricted the computers that can access Source Depot server computers to those that require access by:
- Creating a separate, security-enhanced network forest.
- Using IPsec technology to authenticate and authorize access to Source Depot server computers.
- Registering computer accounts for developers who need access to Source Depot server computers and adding those computer accounts to security groups that are given controlled access. These user rights assignments are combined with IPsec, Kerberos, and the "access this computer from the network" rules and policies.
- Restricting local administrator account permissions on Source Depot server computers.
- Eliminating services and batch jobs run from authorized Source Depot accounts.
- Using data-center patch management and antiviral processes to enforce remediation and auditing of security vulnerabilities.
- Adding event management software and intrusion detection software to Source Depot server computers.
- Locating the Source Depot server computers in a high-security data-center facility.
For technology companies in particular, most critical information and intellectual property manifest as data in one form or another. Protecting data both at rest and as it moves throughout the network and, in some cases, over the Internet can be a daunting challenge. In many cases, specific classifications of data are governed by regulations that require a certain level of safeguards. Accurately identifying and classifying data is critical to designing and enforcing appropriate data security controls.
Numerous global and local regulations may complement each other at times and contradict each other at other times. What is permissible in one country may not be permissible in another. As a global company, Microsoft strives to understand this evolving regulatory environment and then operate within its context.
Microsoft uses the term High Business Impact (HBI) to refer to data whose unauthorized disclosure might cause severe material loss to Microsoft, the information asset owner, or relying parties. The disclosure might be inadvertent or malicious, and the loss would be extremely significant. Internal HBI data might be payroll information, financial information, and intellectual property. External HBI data might be information that is being safeguarded as part of a business process or for customers.
To manage HBI data, Microsoft IT created a classification system and enacted security controls that were weighted to both provide adequate security and enable the business. Hindering the flow of some business information by imposing too-strict security restrictions may reduce agility or profitability, and people may decide to circumvent security controls.
Note: For more information about the Microsoft IT HBI data solution, listen to the Microsoft TechNet Radio session "How Microsoft Does IT: Enabling Information Security Through HBI Information Classification" at http://www.microsoft.com/technet/community/tnradio/archive/oct302007.mspx.
Previous manual scanning efforts had difficulty keeping pace with internal growth and taxed both support staff and the business. Microsoft IT developed a solution that combines third-party scanning software with an internally built classification and remediation system. To help enforce the proper handling of sensitive information, the Information Security team scans repositories, such as file shares, for applicable content. If Information Security finds content that is of a higher sensitivity than its classification, the team marks the content for remediation. Information Security also scans to make sure that the repository is labeled. If the repository is not labeled, a process of notification and potential correction begins.
Automated scanning coupled with automated remediation improves the effectiveness of the solution. In this scenario, Information Security has tied in the scanning application with workflows built into service management support tickets. Support tickets are opened, tracked, and closed; and Information Security is able to measure impact, time to resolve, and success in meeting SLAs.
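The scan-to-ticket workflow described above can be sketched as follows. Everything here is illustrative: the `LBI`/`MBI`/`HBI` ordering, the repository path, and the `Ticket` fields are assumptions for the sketch, not the actual service management schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical sensitivity labels, ordered low to high.
LEVELS = ["LBI", "MBI", "HBI"]

@dataclass
class Ticket:
    repository: str
    finding: str
    opened: datetime
    closed: Optional[datetime] = None  # set when remediation completes

def scan_repository(label: str, detected: str, tickets: list,
                    repository: str, now: datetime) -> None:
    """Open a remediation ticket when scanned content is more sensitive
    than the repository's label, mirroring the workflow described above."""
    if LEVELS.index(detected) > LEVELS.index(label):
        tickets.append(Ticket(repository,
                              detected + " content found in " + label + " share",
                              now))

tickets = []
scan_repository("LBI", "HBI", tickets, r"\\corp\share\finance",
                datetime(2007, 11, 1))
print(len(tickets))  # 1
```

Because each finding becomes a tracked ticket with open and close timestamps, time to resolve and SLA compliance fall out of the ticket data directly, as the paragraph above notes.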
Large Data Repositories
Microsoft uses file shares and SharePoint sites to store and share business information, and some of this information may be sensitive. In total, Microsoft has approximately 74 terabytes of data on managed file-share solutions and approximately 12 terabytes of data on SharePoint sites. SharePoint sites and file shares do not enforce classification or encryption natively, and different repositories house different types of assets with different levels of access. Scanning alone cannot account for this. Establishing an operations team structure that extended beyond Information Security was critical and enabled collaboration with each of the teams that manage the repositories. To manage and resolve incidents more efficiently, Microsoft IT established operating level agreements (OLAs) between teams and prepared the internal Helpdesk for potential issues.
Another challenge is the ability to scale to meet the number of issues. A business workflow engine helps resolve this challenge and includes an automated identification of an incident, automatic notifications to the user, self-remediation, and potential correction. Because automatic notifications prompt the user to self-remediate, the operations team can focus on and resolve fewer incidents while the business remains enabled.
Microsoft IT replaced a heavily manual, less effective process with an operational solution that can be sustained and actively remediate potential exposure of sensitive data. Five key milestones enabled this solution:
- Microsoft IT implemented proof of concept among several vendors.
- Microsoft IT evaluated business rules against business needs. During this milestone, a prerequisite was understanding what the service architecture should look like and then applying technology to that scenario.
- The teams that manage business platforms, legal affairs, and business units reviewed the methodology and approach.
- The solution was designed to wrap around a scanning tool, creating a hybrid of third-party scanning software and an internally built classification and remediation system.
- Microsoft IT implemented the request for proposal (RFP) process with the vendors that had participated in the proof of concept.
The lessons learned during the development of the data management solution include the following:
- The problem of protecting sensitive data cannot be solved by tools alone. It also requires user education, policy awareness, and policy enforcement.
- Microsoft IT needed to recognize project limits and opportunities, and then be flexible in reallocating resources to accommodate those fluctuations. Another key was ongoing, strong, collaborative relationships with all areas of IT, finance, and legal departments.
- Microsoft IT highly recommends using a pilot program, not only to help determine the right-fit vendor, but also to learn about the enterprise's own information that needs protection and the processes that may help or hinder that effort.
User identity is at the heart of most security controls. A user's specific identity is what allows the user specific levels of access to network resources, as well as the resources on his or her own computer. Security controls concerning identity focus on minimizing the potential for impersonation of user credentials. Single-factor, password-only authentication has inherent risks that require mitigation until two-factor authentication is widely adopted and enforced.
Passwords provide the first line of defense against unauthorized access to a computer. The stronger the password, the more protected a system will be from hackers and malicious software. When users apply strong passwords, they are helping to protect the corporate network and the intellectual property that resides there. Challenges in implementing strong password requirements included educating users about the requirement and benefits, and an increased volume for password resets at the Helpdesk.
The improved security of Microsoft digital assets afforded through the implementation of strong password requirements outweighed the potential risks, which included alienating users and the possibility that users would synchronize passwords across multiple systems. Controls include configuring user profiles so that a user must meet the minimum requirements for a strong password; Group Policy in Active Directory enforces this requirement. Additionally, online training reinforces the minimum requirements for strong passwords and encourages users to go beyond that minimum threshold by teaching them techniques for easily creating stronger passwords.
Future of Passwords: Smart Cards
In January 2008, Microsoft will move to eliminate passwords in favor of a more secure system. Rather than using only one-factor authentication such as passwords, employees will use a combination of PINs and smart cards. Two-factor authentication provides greater protection from external and internal security breaches.
Today, Microsoft IT enforces a policy of strong passwords for all network users. Passwords expire every 70 days, and some of the requirements include:
- Administrator-level passwords must be 15 alphanumeric characters in length.
- User passwords must be at least eight alphanumeric characters in length.
- Passwords must contain uppercase and lowercase characters, digits, and punctuation, and also meet other specifications.
- Passwords cannot contain slang, dialect, jargon in any language, or be based on personal information such as family names.
- New passwords must vary significantly from prior passwords.
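The length and character-class requirements above are straightforward to check mechanically; a minimal sketch follows. This is not the Windows password filter: the slang/dialect, personal-information, and password-history checks are omitted because they require dictionaries and prior-password state.

```python
import re

def meets_minimum(password: str, is_admin: bool = False) -> bool:
    """Check a password against the minimum-length and character-class
    requirements listed above: 15 characters for administrators, 8 for
    users, with uppercase, lowercase, digit, and punctuation required."""
    min_len = 15 if is_admin else 8
    if len(password) < min_len:
        return False
    # lower, upper, digit, punctuation (non-word, non-space character)
    patterns = [r"[a-z]", r"[A-Z]", r"\d", r"[^\w\s]"]
    return all(re.search(p, password) for p in patterns)

print(meets_minimum("Tr1cky!Pass"))        # True
print(meets_minimum("Tr1cky!Pass", True))  # False (admins need 15+ characters)
```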
Many of the techniques, standards, and products available to help secure an enterprise network rely on some form of cryptography. PKI provides security services, such as encryption, authentication, digital signatures, and certificates that each party uses in a cryptographically secured electronic transaction. The 802.1X protocol provides wireless network authentication for the computer and the user. Computer authentication is needed for receiving policy updates and logon scripts. Some of the components of the Microsoft PKI infrastructure include:
- Smart cards
- Required for RAS.
- Capable of interactive smart card logon, but not required until January 2008.
- Capable of S/MIME signature.
- Currently uses certificates that chain to a publicly trusted root.
- Encrypting File System (EFS)
- Defines a Domain Recovery Agent (DRA) for each domain to avoid loss of initial default DRA certificates.
- Provides the ability to perform data recovery and key recovery with Windows Server 2003.
- Secure Sockets Layer (SSL)
- SSL provides security-enhanced Web services to internal (corporate network) production and test environments, as well as the extranet and the Internet.
- Issued SSL certificates chain to a publicly trusted root authority.
- Used for authentication purposes only, not encryption.
The Microsoft IT and Microsoft Information Security teams rely on managing risk to provide security for the Microsoft network. With a solid framework, policies, and clear roles and responsibilities in place, Microsoft Information Security can identify, prioritize, and evaluate risk in a proactive way. This continuous evaluation process provides decision makers with the data they need to make more informed business decisions that balance risk against the costs of security controls.
Risk is an inherent part of any computer network. New and emerging generations of malicious software threaten the security of the global enterprise from multiple directions. Traditional methods for providing security for the network—inside and at the perimeter—and protecting intellectual property are no longer sufficient. By deploying a comprehensive risk management framework with defined roles and responsibilities, organizations can identify priorities, mitigate threats, and address potential security vulnerabilities.
To address the needs of today's evolving IT environment, Microsoft IT constantly adjusts its security strategies and advances technologies to address remote access, mobile devices, policy awareness, and more. For example, Windows Vista and Windows Server 2008 are introducing a new generation of technologies and applications that offer built-in, automated security. Although individual security needs will vary, any organization can employ the Microsoft approach to managing risk or use the IO Model as a roadmap to transform its security controls from reactive to proactive.
For more information about Microsoft products or services, call the Microsoft Sales Information Center at (800) 426-9400. In Canada, call the Microsoft Canada Information Centre at (800) 563-9048. Outside the 50 United States and Canada, please contact your local Microsoft subsidiary.
You can also visit the Microsoft Web site at www.microsoft.com.
For more information about IT Showcase, go to http://www.microsoft.com/technet/itshowcase.