Security Entities Building Block Architecture
Archived content. No warranty is made as to technical accuracy. Content may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
Microsoft Solutions Framework
Best Practices for Enterprise Security
Note: This white paper is one of a series. Best Practices for Enterprise Security ( http://www.microsoft.com/technet/archive/security/bestprac/bpent/bpentsec.mspx ) contains a complete list of all the articles in this series.
Within organizations, the expanded use of connected computers to store, process, and share mission-critical data has heightened the need to secure the network. Corporations increasingly depend on making their sensitive data accessible from many different places, outside as well as inside the organization.
This dependence requires very careful management of the technologies available to secure the data. The computer industry has developed many technologies to secure specific aspects of the storage and communication processes. These technologies are only as successful as the plan used to combine them.
This document provides information on security-related issues and gives an overview of the risks and countermeasures to consider when planning and implementing security. It also introduces the Security Entity Building Block Architecture (SEBA), which can be used as the foundation for successfully creating and deploying a security plan in any environment.
Questions and feedback are welcome. Send inquiries to: email@example.com
The objectives of security involve guarding data and information technology (IT) networks against many types of threats. Corporations face a challenge in doing this because using the technology to full advantage entails providing access to their corporate networks and the resources these networks contain to users both inside and outside the corporation. Therefore, they need to control the kinds of access that are given to different users for different resources.
First, enterprises need to ensure that particular system resources, from the individual computer that contains data and programs to the entire network, are available to users who are authorized to use those resources, but unavailable to anyone else. Technology offers one way of providing this protection in the access management capabilities of operating systems. Operating systems use security mechanisms to actively manage access to files and other resources. These mechanisms include individual user profiles that identify users and their access privileges, as well as mechanisms that differentiate between different resource environments. They guard against threats such as intruders who try to gain access to data in order to manipulate or steal it.
Note: The operating systems referred to throughout this paper are Microsoft Windows 2000 and Microsoft Windows NT systems. The Microsoft Windows 9x operating systems offer less advanced security mechanisms.
Businesses also need to make specified resources available to users such as customers, who are outside their own internal network. Ideally, these users should be able to access the resources at any time under optimal performance conditions—reliable access is a critical part of the value that is provided. Online banking and shopping are examples of two services that businesses provide, using their networks, that involve external users accessing and interacting with corporate data. However, resource availability can be the target of a variety of attacks. The denial of service (DOS) attack, for instance, is a well-known type of attack; its objective is to make data access difficult or impossible, resulting in a loss of business for the enterprise.
In addition to protecting data from people who attack systems with the intention of doing harm, security mechanisms also protect organizations and support authorized users by preventing them from accidentally or innocently causing damage to IT resources. This is accomplished by using technological devices in conjunction with security procedures and security policies.
Security Levels and Costs
Achieving perfect security is as much a myth in the IT environment as it is in everyday life. How much security, then, is necessary? The degree of security considered to be adequate for an enterprise is a function of the value of the information to be safeguarded, the threats that this information is subject to, and the exposure to risks that the company is willing to accept.
The next question becomes: What are the costs? These will include additional hardware, software, network, maintenance, and education costs as well as less obvious processing costs. Every security-based service adds a certain level of processing cost to the system. The consequence of this processing cost can be seen in a slowdown of access time or other undesirable symptoms.
Therefore, in addition to understanding all of the available security mechanisms, it is important that an administrator determine the level of security that is needed for specific scenarios in order to avoid unnecessary costs. Decision-makers must determine whether it is worth spending additional funds on security features to protect each resource. Deciding upon the security measures to implement involves finding the optimal compromise between performance (productivity), data protection, and the involved costs. In a good solution:
Only the right people get access at any time to the right information with the best possible performance at the lowest possible costs.
Because computer security involves the enterprise's total set of exposures, from the local workstation or server to the Intranet and beyond, it cannot be attained by simply implementing a "magic bullet" software product solution or by installing a firewall. Computer security must be implemented by reliable mechanisms that perform security-related tasks at each of several levels in the environment. Implementation also involves applying security procedures and policies at each of these levels.
The Need for a Security Architecture
The previous section provides a general definition for the goals of IT security. The next step is to determine the locations in a networked environment where resources have to be protected.
The Complexity of Security Issues
Security has become an essential part of planning and managing information systems. Yet computer security is a very complex, multi-faceted issue involving not only the computer where data originates, but also multiple points throughout various networks through which the data passes. It is, therefore, important to identify and protect each point where security has a potential of being breached. A security implementation is only as secure as its weakest link.
Analyzing Data Flow
In order to isolate points of vulnerability, it is necessary to analyze the data flow through the networks. From this perspective, there are two basic scenarios:
Data stored on a computer. For data stored on a local computer, the operating system is the major provider of the necessary services for the protection task. Using these services requires that they be properly configured.
Data traveling across communication points. Data traveling between locations needs to be secured in a different way, and this often involves encryption. Generally speaking, this data is in one of two forms: data in the form of network packets coming into a system, and data that is leaving the system.
Protecting incoming data encompasses both guarding the data itself and guarding the system against threats posed by the data once it has entered the network. Protection activities include a system check to ensure that the data comes from an authorized sender and that it can perform only authorized tasks.
Protecting data that is leaving a computer involves ensuring that it reaches its target in exactly the same format in which it was sent, without being changed. The session and data type, as well as data content, must be unreadable by a third party—that is, privacy must be preserved.
Network Connection Scenarios
A modern corporate network usually offers several possibilities for data to leave or enter a specific computer. Computers can have individual modems with a variety of available connection scenarios. Additionally, most computers are connected to a local (internal) network from which data can branch through multiple points to numerous destinations.
Typical network scenarios are:
A corporate network that shares a private network with another company.
A corporate network with Web servers that are located at an ISP, accessible either via dial-up or a permanent connection.
A corporate network with dial-up capabilities.
A corporate network with a permanent connection to the Internet.
The following diagram shows a typical network with multiple data transport connections running between the computers.
Given this complex scenario and the many opportunities it offers for breaches of security, implementing security should be a step-by-step process that starts with the primary local resource where the data is housed, continues through the intervening points, and concludes with the permanent connection to the "rest of the world."
To support this step-by-step security implementation process, a suitable analysis and deployment architecture is needed.
Defining Security Architectural Entities
To construct an architecture that will enable organizations to both analyze security needs and deploy security measures, it is necessary to consider the entire network structure, and then separate that structure into discrete security entities. These entities should be both physical and conceptual. Security exposure can then be determined and security implemented for each entity. The challenge for such a model is to determine the entities or zones. In the architectural model we present, each entity is described and its role in the system is related to the security risk analysis that needs to be performed.
End System Entity
The end system entity, the basic security entity in a network, is a computer with an operating system. In modern networks, computers have specific roles. The differentiation between workstation and server is a very basic categorization commonly used. However, even servers have different roles, such as application server and domain controller.
As a result, it is necessary from the security point of view to classify computers based on their roles and the types of data they handle into different security levels such as high, medium, and low. This system can be thought of as similar to security rankings assigned to government employees who work with sensitive material.
The granularity, or level of detail, of this classification depends upon the security needs in a given environment. The classification ought to be carefully planned and documented in order to identify less obvious potential weak points and to ensure that assigned levels are correct. For example, workstations should generally be graded as high a security risk as servers, despite the common perception that servers are more important. People tend to download confidential data onto their local machines in order to work with it offline. If such a workstation is a laptop and the laptop is stolen, the data is far better protected if the machine was classified as a high-risk end system and its data secured in an appropriate way.
The next step is to determine the items in the end system entity that should be analyzed for security risks. These include:
Local account policies
Local policies and event logging
File system security
System services security
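The risk items above lend themselves to a simple automated baseline check. The following Python sketch is illustrative only; the setting names and baseline values are assumptions for the example, not values taken from this paper:

```python
# Illustrative baseline audit for an end system entity.
# Setting names and threshold values are hypothetical examples.

BASELINE = {
    "min_password_length": 8,    # shortest acceptable password
    "lockout_threshold": 5,      # most failed logons allowed before lockout
    "audit_logon_events": True,  # logon event logging must be enabled
}

def audit_account_policy(actual):
    """Return a list of findings where 'actual' falls short of BASELINE."""
    findings = []
    if actual.get("min_password_length", 0) < BASELINE["min_password_length"]:
        findings.append("password length below baseline")
    if actual.get("lockout_threshold", 999) > BASELINE["lockout_threshold"]:
        findings.append("lockout threshold too permissive")
    if not actual.get("audit_logon_events", False):
        findings.append("logon auditing disabled")
    return findings

# A workstation with weak settings produces three findings.
print(audit_account_policy({"min_password_length": 6,
                            "lockout_threshold": 10,
                            "audit_logon_events": False}))
```

In practice, such settings would be collected per machine role, so that systems classified as high, medium, and low are audited against different baselines.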
After the risks have been determined, the last step for this entity is to deploy countermeasures. It is a common strategy to carry out deployment for objects in two steps.
The first step covers all necessary configuration changes that are independent of the type and grade of system being used. One example is the file system permissions for the folder in which the operating system components are stored. Such configurations can be done during an unattended setup process.
The second step covers configuration settings based on the individual needs of a group of systems. These settings are usually implemented via system policies.
To recap, end systems are computer hardware devices with an operating system. The primary security goal is to protect the permanently stored data and the services associated with the operating system. We can exclude the communication devices installed on the machine (for example, network adapters and modems) from consideration within this entity; they are part of the local communication system entity.
Local Communication System Entity
The next elementary step for security-related issues is reached when data can be transmitted between computers; this capability is also known as network functionality. The goal of networks is to share local resources with remote computers. An administrator specifies which parts of the local resources ought to be accessible from remote systems. The goal of security services is to assure that only these resources are available for authorized users.
There are different technologies available for accessing remote resources. The main devices are network adapters and dial-up devices that range from a modem attached to a workstation to the corporate Network Access Server (NAS).
A common risk for data that is traveling between network nodes (network packets) is that a packet will be captured, read, and in the worst case modified, by an intruder. The risk that a person other than the designated recipient might, at a minimum, read the content of a network packet has grown dramatically with the increasing use of untrusted networks like the Internet.
Data en route cannot be directly protected by services of the operating system. However, there are different technologies (protocols) available to create a tunnel between two nodes and encrypt the information. All of them have their individual limitations and the decision regarding the appropriate technology or combination of technologies needs to be well planned.
From the security perspective, there are two major issues involved in this exchange of information:
The data that is leaving a computer must reach the target without being read or changed before it reaches its destination.
The packets that are reaching and entering a computer must be from an authorized user and their objective must be to pursue authorized tasks.
Administrative Authority Entity
End systems and local communication systems cannot be managed on an individual basis in an enterprise-based network. The systems of such a network may be distributed over the entire world, and the management task requires centralized security management, also known as administrative authority. If changes in the current configurations are necessary, the administrative authority can make them at a centralized point and distribute them to all appropriate systems. A centralized management system should be scalable and allow for a fine granularity in its hierarchy.
From the security point of view, it is necessary to keep in mind that configuration changes will usually not be applied immediately to all systems; the delay depends, among other things, upon the configuration of the replication mechanism. Centralized security management requires a deep technical understanding of the processes involved. Administrators should plan changes very carefully because of the complexity of the mechanisms involved.
The Administrative Authority can be used only for entities for which it is authorized. Corporate networks are usually connected with networks of other organizations and can have a connection to the Internet. The accounts used in these environments are not under control of the Administrative Authority, and security issues that result from connecting with them must be covered in separate entities.
Private Network Entity
A private network entity exists when two or more companies share a private network. This means that a separate administrative authority is given responsibility for a specific group of external accounts that have access to local resources. Access to local resources can be controlled with trusts between the authorities. The private network entity extends the local network with the capability of cross-enterprise networking.
From the security point of view, it is important to plan very carefully the kind of data that will be accessible to users whose accounts are under the control of a remote administrative authority. The network should be structured to avoid giving these accounts direct access to the internal corporate network.
Internet Block Entity
The Internet Entity describes a corporate network that is permanently connected to the Internet. Anyone with an Internet connection can theoretically have access to the company's computers and their resources.
For reasons of security, it is a common technique to locate the servers containing data that untrusted users can access in separate segments of the system. These locations are called the outer perimeter (and are also known as demilitarized zones, or DMZs). They are usually secured with firewalls on both sides. Access to corporate data is established from these front-end computers to the internal network.
The higher the internal corporate security, the lower the risk of damage in case an intruder succeeds in crossing the firewall boundary to the internal network. On the inside of this boundary, access to local resources is controlled by the security mechanisms that belong to the local communications systems and end systems entities.
In summary, the security structure of an enterprise can be divided into the following entities:
End systems (computer hardware devices with an operating system)
Local communication systems (network functionality)
Administrative authority (centralized security management)
Private networks (network sharing between companies)
The Internet (permanent connection to the Internet)
The following diagram encapsulates the relation between the conceptual and physical entities we have been discussing. The entities are referred to as building blocks, and are the foundation of the "Security Entities Building Block Architecture."
The benefit of viewing the network as a set of conceptual building blocks is that it enables us to identify and isolate each entity in order to focus on implementing security within it. It also emphasizes that the building blocks are sequential: beginning with the End System Entity, each block acts as an extension of its predecessor. If a particular environment lacks an entity, its discussion can be skipped and the analytical process can proceed with the next entity.
The objective of this framework is not to provide a cookbook on security for all possible scenarios. However, it does provide a structured foundation for looking at security needs and solutions, which helps in developing an actual security concept.
At this point, we have a generic definition of IT security and we have isolated entities for which this definition has to be fulfilled. The objective of this section is to provide a more detailed description of the security goals that need to be addressed.
Types of Attacks
Security services are intended not only to protect resources from malicious attacks but also to make it impossible for users to harm the system by chance, thus enabling them to use it with confidence. These two types of protection provide the first two main categories that security services should address—malicious and non-malicious attacks.
Malicious attacks vary in their targets, methods, and motives but are usually covert acts by individuals who wish to harm the system or prevent others from using it. Following are some examples of types of malicious attacks:
Denial of service (DOS) attacks are mounted with the intention of causing a negative impact on the response time of a system or totally crashing it. They are often targeted at companies that do business on the Internet.
Viruses are another example of malicious attacks. There is a broad range of different viruses that are able to prey on systems. Persons who create and unleash viruses may want to cause mischief, frighten a user, or damage important data. In the case of both viruses and DOS attacks, the motive is not usually to secure a personal benefit, but to interrupt or halt the work of others.
Attacks by hackers or crackers are a third type of malicious attack. They may try to crack passwords or change the content of a message during its transmission. Their motive is often simply to prove that they are able to get past a code barrier that was intended to be unassailable, and part of the challenge they relish is remaining unrecognized.
Attacks for personal gain are a final type of malicious attack. In this case, attackers attempt to penetrate networks in order to steal or change data. Examples include people who try to gain access to credit card numbers, seek information about business competitors' plans or products, and reroute electronic funds transfers into different accounts.
A user who accidentally deletes important operating system files is an example of a non-malicious attack. Greater precautions must be taken for users with access to more sensitive data. It is common practice, for instance, for each administrator to have at least two accounts, and to use the administrative account, with its extra security safeguards, only to execute administrative tasks, in order to prevent accidents.
Recovery from Attacks
In addition to incorporating methods and devices for preventing attacks, a security concept must consider a scenario in which an attack succeeds. Because it is not possible to guarantee a 100 percent secure environment, it is advisable to assess the risks involved and plan ways to minimize possible damages. This may involve backups and should certainly include procedures for data recovery and restoring the system. Any recovery procedures should be periodically tested to ensure that they are current and effective in restoring system and data availability.
Data availability is the bottom line of security. Clearly it isn't necessary to worry about data integrity if the data cannot be accessed. Users need different amounts of data with different degrees of urgency at different, often unpredictable, times; therefore, the security administrator wants to minimize the time during which data is not available and a data request cannot be immediately fulfilled.
Several factors influence data availability, and can be grouped into the following categories:
Any important computer should reside in a secure environment, where the room temperature and humidity are maintained at the manufacturer's recommended specifications. It is definitely a mistake to locate an important server that hosts critical data in a public area. Because computer hardware consists of visible devices, a thief would not need a detailed understanding of computer science in order to steal the system or information. A public location would also make the computer more vulnerable to a malicious intent to physically destroy the system or its data and undermine availability.
Redundancy, or fault tolerance, refers to providing duplicate components that support availability by keeping a system running even if one or more components are not working correctly. The most important parts of the system, in particular, should be implemented as clones. Redundant components can be implemented in the form of hard and software components. An example of a hardware component is a mirrored hard drive that can be used if the main hard drive fails to work. Similarly, a Distributed File System (DFS) share that represents different identical shares can be used if one of them becomes inaccessible.
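The DFS example can be sketched in code: a client tries each identical share in turn and fails over when one is unreachable. The share paths and the fetch callback below are hypothetical stand-ins, not real DFS APIs:

```python
# Illustrative failover across redundant, identical shares.
# Share paths and the fetch callback are hypothetical.

def read_with_failover(replicas, fetch):
    """Return data from the first replica that answers; raise if all fail."""
    last_error = None
    for replica in replicas:
        try:
            return fetch(replica)
        except OSError as err:
            last_error = err  # remember the failure and try the next copy
    raise RuntimeError("all replicas unavailable") from last_error

def fake_fetch(path):
    # Simulate the first share being offline.
    if path == r"\\server1\data":
        raise OSError("host unreachable")
    return "contents of " + path

# Falls over from \\server1\data to \\server2\data.
print(read_with_failover([r"\\server1\data", r"\\server2\data"], fake_fetch))
```

The same pattern applies to any redundant component: availability is preserved as long as at least one clone remains reachable.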
Emergency repair procedures for certain tasks, such as those involving restoration-of-data concepts, are an extension of the redundancy philosophy. If an intruder has been able to cause damage to the data in an environment, emergency repair procedures should be able to get a system back to a "last known good" status with a minimum of data loss. There should also be procedures for unattended setup methods for workstations that need to be brought back to a clean working status after an attack has taken place.
Security Concepts Terminology
The steps discussed so far provided answers for general security issues. It is now time to take a look at more complex security demands, and this involves understanding the more subtle terminology of computer security. There is disagreement about the exact meanings of some expressions, and it is advisable for a team that is formulating a security concept to agree upon a terminology. Common definitions and variations can be found in the Consolidated Security Glossary of the National Institute of Standards and Technology (NIST) ( http://csrc.nist.gov/publications/ ) and are repeated in the following sections in italics.
Data Protection Processes
As mentioned earlier, an optimal security solution involves compromises. In order to determine the best one, it is very important to specify security requirements in a detailed manner. The following sections explain these requirements.
Identification and Authentication
Identification: "Process that enables recognition of an entity by an IT product."
Authentication: "The process by which the identity of an entity is established."
To ensure that only the appropriate people can have access to resources, it is first necessary to identify users. This requires an authentication process where users must prove their identity; this process is also known as identification. It is accomplished during a logon process, which is usually the first step users must perform in order to gain access to any data in a given environment. The authentication process validates a user's user name and password against a database. Only when users are authenticated to the system is it possible for them to request access authorization for specific data.
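The validation step can be sketched as follows. Note that this is a modern illustration using a salted, iterated hash; it is an assumption chosen for clarity, not a description of how Windows NT or Windows 2000 actually store credentials:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store a salted, iterated hash instead of the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("s3cret-Phrase!")          # done at account creation
print(verify_password("s3cret-Phrase!", salt, digest))  # True
print(verify_password("wrong-guess", salt, digest))     # False
```

Storing only the salted hash means that even someone who reads the account database cannot directly recover users' passwords.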
This process has many weak points. Users are usually identified on the basis of the information they provide:
The user name is the unique identifier, recorded in the account database that distinguishes the user from others in a given environment. The password is the primary control mechanism. It is also the weak point in this process because users tend to use "easy" passwords that are easily recalled when they want to log on.
The computer industry offers a number of different answers to the problem of identifying information that is easy to discover and duplicate. They range from the requirement of a "strong password" to hardware tokens such as smart cards and biometric input such as fingerprints.
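A "strong password" requirement can be enforced programmatically. The rules in this sketch (a length check plus four character classes) are common illustrative choices, not a policy taken from this paper:

```python
import re

def is_strong(password):
    """Illustrative strength rules: minimum length plus four character classes."""
    return (len(password) >= 8
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(is_strong("Tr0ub4dor&3"))  # True: long, mixed case, digit, symbol
print(is_strong("password"))     # False: lowercase letters only
```

Such a check is typically applied when a password is set or changed, so weak choices never enter the account database.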
Access Control
"Process of limiting access to the resources of an IT product only to authorized users, programs, processes, systems, or other IT products."
A primary objective of an operating system is to manage the system resources. Each operation that accesses these resources is designed with an accompanying access control procedure. This built-in control supports security by ensuring that only "the right people" get access.
In general, access control refers to a process of the operating system in which access rights to a given resource are validated based on access control lists associated with the resource. The security authority compares the access token with the content of the access control list in order to determine the resulting rights for the resource. Note that the access control process limits the access not only of users, but also of programs, processes, systems, and other IT products, to system resources.
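The comparison of an access token against an access control list can be sketched as set operations. Real operating-system ACLs also handle deny entries, inheritance, and entry ordering; the resource and group names here are hypothetical:

```python
# Illustrative access check: token groups versus an access control list.

ACL = {
    "payroll.xls": {
        "Administrators": {"read", "write"},
        "Finance": {"read"},
    },
}

def access_check(token_groups, resource, requested):
    """Grant the request only if the token's groups cover every right asked for."""
    entries = ACL.get(resource, {})
    granted = set()
    for group in token_groups:
        granted |= entries.get(group, set())
    return requested <= granted

print(access_check({"Finance", "Users"}, "payroll.xls", {"read"}))           # True
print(access_check({"Finance", "Users"}, "payroll.xls", {"read", "write"}))  # False
```

Because the check runs on every access, a user whose group memberships change receives different effective rights without any change to the resource itself.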
Authorization
"The granting of access to a security object."
"The process by which an access control decision is made and enforced."
The exact role that authorization has in connection with access control is not well defined in the literature. A common description is that authorization occurs in the context of the authentication process.
After a user has been successfully authenticated, the operating system examines the privileges (for example, "log on locally") that are assigned to the user account. If the database shows that the user has been granted permission to log on to a given system, the additional privileges are added to the user's ID (Access Token).
Another view of the authorization process is that it is used by the access control mechanism to validate the access rights of a user to a given object.
Finally, authorization is described in some documents as the process of granting access rights to resources by an administrator.
Confidentiality
"The prevention of the unauthorized disclosure of information."
"A security property of an object that prevents:
- its existence being known and/or
- its content being known."
"This property is relative to some subject population and to some agreed degree of security."
"Assurance that information is not disclosed to inappropriate entities or processes."
"The property that information is not made available or disclosed to unauthorized individuals, entities, or processes."
The objective of confidentiality is similar to that of access control. However, this term is more commonly used in conjunction with cryptography, which is used to protect information that is traveling between nodes in a network. Encrypted information has been coded so that it cannot be readily understood if it is intercepted. Confidentiality in its broadest definition extends the philosophy of access control to protect from disclosure not only the data itself, but also the identity of the operation that is being performed by a user.
The importance of confidentiality in the broader sense can be understood by considering a user's session with a bank. The security mechanisms should guarantee that no one can discern whether the action a user is performing is checking account information or transferring money. The absence of this aspect of confidentiality could lead to malicious attacks on certain activities.
Data Integrity
"The state that exists when computerized data is the same as that in the source documents and has not been exposed to accidental or malicious alteration or destruction."
Data integrity serves as a control mechanism for other security services such as access control for data stored on a hard drive and data that has traveled "on the wire" (over the network). The security-related objective is to solve the problem of undetected corruption or modification of data. Integrity services assure that a problem does not exist by verifying that the content of a message has not been modified, and if a sequence of messages is transferred, that the sequence has been preserved. This is especially important when data is traveling over the network and cannot be protected by any security services of the operating system.
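Integrity of a message in transit can be verified with a keyed hash: the sender computes a tag over the message, and the recipient recomputes and compares it. The shared key below is a hypothetical example; real deployments would exchange keys securely:

```python
import hashlib
import hmac

SHARED_KEY = b"illustrative-shared-key"  # hypothetical; agreed out of band

def protect(message):
    """Compute an integrity tag to send along with the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message, tag):
    """Recompute the tag; any modification en route makes it mismatch."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg = b"transfer $100 to account 42"
tag = protect(msg)
print(verify(msg, tag))                              # True: unmodified
print(verify(b"transfer $9999 to account 13", tag))  # False: tampered
```

Sequence numbers can be included in the protected message so that reordering or replay of messages is detected as well.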
Non-Repudiation
"Denial by one of the entities involved in a communication of having participated in all or part of the communication."
This concept involves a mechanism that is used to prove the identity of the sender of a specific message. It is an extension of authentication, but is usually used for legal purposes. An example would be an e-commerce application in which the mechanism was used as proof by a recipient to a third party (judge or jury) that a sender's denial of sending a message was false.
There are four major scenarios involving non-repudiation:
Proof of sender (origin). Protects the recipient from the sender's denial that the data was sent by that user.
Proof of submission. Protects the sender from someone disputing that the sender actually submitted the data.
Proof of delivery. Protects the sender and/or the service provider from someone disputing that the data has been delivered to the correct destination.
Proof of receipt. Protects the sender from the recipient's denial of ever having received the data.
Building Block Structure
The security entities defined in the previous section are the framework for the Security Entities Building Block Architecture (SEBA) introduced in this document. Each of these entities—or building blocks—contains an analogous internal structure of issues and components that need to be reviewed, analyzed, and implemented in order to apply security. Although the component sets are similar, the specific issues are somewhat different because each entity focuses on a different environment.
Diagram 3 shows the set of components that need to be addressed to secure the administrative authority entity.
The following section defines the components within each entity, beginning at the base of the list.
Security policies are derived from business security needs. Policies define rules for the basic tasks in a given environment, and they can be written or automated within programmed processes. Written policies are intended to guide the actions of system users.
Availability, as already mentioned, is an important aspect of security. Each entity usually provides some services that fulfill this requirement.
The first step in analyzing the risks associated with availability is always to identify Single Points of Failure (SPOF) and to evaluate the impact on the system if a SPOF fails. The impact of the SPOF determines the level of availability that has to be reached.
Availability is usually expressed as a percentage. Systems that host mission critical data should plan to provide availability in the range of 99.0 to 99.5 percent. A requirement of 99.95 percent or above is also known as High Availability.
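These percentages are easier to evaluate when translated into permitted downtime. The following sketch does that arithmetic; the figures follow directly from the number of minutes in a (non-leap) year.

```python
# Translate an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes(availability_percent: float) -> float:
    """Minutes per year the system may be unavailable at this grade."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

# 99.0 percent still allows roughly 87.6 hours of downtime per year ...
assert round(downtime_minutes(99.0) / 60, 1) == 87.6
# ... while 99.95 percent (High Availability) allows only about 4.4 hours.
assert round(downtime_minutes(99.95) / 60, 1) == 4.4
```

The gap between the two grades, roughly 83 hours per year, is what justifies the additional cost of redundant hardware and failover planning for mission critical systems.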
Authentication is, as previously mentioned, the basic mechanism used by automated security services. Each time a resource is accessed, the result of an authentication process is used. The particular mechanisms vary, based on the different authentication and identification methods available within the entity.
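One common identification method within an entity is password verification, in which the system stores a salted, slow hash rather than the password itself. This is a minimal standard-library sketch under that assumption; the function names and iteration count are illustrative.

```python
import hashlib
import hmac
import os

def register(password: str) -> tuple[bytes, bytes]:
    """Store a per-user random salt and a slow, salted hash of the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def authenticate(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash from the claimed password; compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = register("correct horse")
assert authenticate("correct horse", salt, stored)
assert not authenticate("wrong guess", salt, stored)
```

The salt prevents precomputed-dictionary attacks across users, and the deliberately slow hash raises the cost of guessing even if the stored credentials are stolen.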
Keeping the wrong people away from information is about more than just keeping things secret. It also involves ensuring that the information remains complete, intact, and uncorrupted, and that no one is able to determine the type of operation a user is performing.
The traditional security services provided by an operating system act as a guard whose objective is to control access to resources. This includes all the functionality included in authorization and access control, and it can be summarized as access management. Because these services can only perform their job as long as the operating system is online, they provide active security.
However, these services alone do not meet all security requirements. A device driver for MS-DOS that enables a user to read NTFS partitions is an example of a tool that can be used to bypass traditional operating system access control mechanisms.
A modern operating system should also provide passive components in the form of encryption. Data encryption increases the level of security because additional steps are necessary to retrieve a readable form of data, which makes it more difficult for even an intruder who has successfully penetrated the access management barrier to read the data. However, the use of these services has to be well planned, because they add additional cost (at least in the form of CPU cycles) to encrypt and decrypt a given resource.
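The passive protection described above can be illustrated with authenticated symmetric encryption. This sketch assumes the third-party Python `cryptography` package and its Fernet recipe; in a real system the key would be protected by the platform's key management facilities, not generated inline.

```python
# Encryption at rest as passive security (assumes the third-party
# `cryptography` package; key management is omitted for brevity).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()  # in practice, protected by key management
f = Fernet(key)

ciphertext = f.encrypt(b"payroll records")
assert ciphertext != b"payroll records"           # unreadable without the key
assert f.decrypt(ciphertext) == b"payroll records"

# An intruder who bypasses access control but lacks the key gets nothing useful.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
    readable = True
except InvalidToken:
    readable = False
assert not readable
```

Unlike access control lists, this protection holds even when the operating system is offline and the disk is read directly, at the cost of the extra CPU cycles noted above.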
Each entity provides a collection of counters or other kinds of status information that can be monitored and logged. In order to retrieve meaningful information about the current security status of a given system, it is necessary to configure these counters carefully. The process of configuring and evaluating these parameters is known as security analysis.
If security policies, availability, authentication, and resource protection are the ways to provide security, security analysis is the way to measure results. When security is important, it is necessary to verify the status of the security mechanisms that are being used. This allows the system to detect possible security risks. The following three tasks are used to verify the status of the security mechanisms:
Logging. Typical log entries include such items as device driver failures, data errors from network cards, or unsuccessful logons. Logs are kept of all processing transactions and can be retrieved for analysis.
Long- and short-term monitoring. Usually, information that is retrieved by a logging mechanism can only be used as an indicator that something is potentially wrong. In order to get to the bottom of a problem, it may be necessary to monitor a specific area of the system. The collected data can be displayed in different ways (for example, graphically through charts or as text through reports) to provide the focus and level of detail that is needed. Monitored data can be displayed in real time or collected in logs for later analysis, depending on the problem that is being isolated.
Alerting. An alerting system notifies an administrator of a problem in real time and provides the opportunity to initiate corrective actions.
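The three tasks above can be sketched with the Python standard library's logging facility: routine events are recorded, while events above a severity threshold also trigger an alert hook. The handler class and messages here are illustrative assumptions; a production system would page an administrator rather than append to a list.

```python
import logging

class AlertHandler(logging.Handler):
    """Toy alerting hook: real systems would notify an administrator here."""
    def __init__(self):
        super().__init__(level=logging.ERROR)  # only severe events alert
        self.alerts = []

    def emit(self, record):
        self.alerts.append(record.getMessage())

log = logging.getLogger("security")
log.setLevel(logging.INFO)
alerter = AlertHandler()
log.addHandler(alerter)

log.info("user alice logged on")             # routine entry: logged only
log.error("5 failed logons for user admin")  # severe: logged and alerted

assert alerter.alerts == ["5 failed logons for user admin"]
```

Because the handler carries its own severity threshold, the same log stream serves both purposes: everything is retained for later monitoring and analysis, while only threshold-crossing events interrupt an administrator in real time.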
Thoughtful use of monitoring tools can provide an administrator substantial feedback on the effectiveness of the organization's security and security policy. If security policies are the starting point for achieving security within each entity of a corporate network, security analysis completes the process, "capping off" the entire effort.