Data Security and Data Availability in the Administrative Authority

Archived content. No warranty is made as to technical accuracy. Content may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

By Kenneth Pfeil

Contributor: David Swartzendruber

Microsoft Solutions Framework

Best Practices for Enterprise Security

Note: This white paper is one of a series. Best Practices for Enterprise Security ( https://www.microsoft.com/technet/archive/security/bestprac/bpent/bpentsec.mspx ) contains a complete list of all the articles in this series. See also the Security Entities Building Block Architecture ( https://www.microsoft.com/technet/archive/security/bestprac/bpent/sec2/secentbb.mspx ).

On This Page

The Focus of This Paper
Part 1: Goals for Data Security and Data Availability
Part 2: Data Availability
Part 3: Data Security
Other Informative Sources

The Focus of This Paper

This paper presents an overview of the key points of data security and data availability in the Administrative Authority. The goal of a totally secure environment is unrealistic; no environment is invulnerable. Indeed, the more complex your environment, the more difficult it is to secure effectively. By combining the fundamentals presented in the other white papers in this series with the material in this paper, you can create a much more secure environment.

Every security plan depends on numerous factors, including policies, budget, staff, technology, and the practical constraints of your organization. Each of these factors has associated costs (both monetary and intangible) that must be considered carefully before implementation. Within the Administrative Authority there is much greater room for error throughout the enterprise, so great care and caution are both a necessity and a prerequisite for safe computing.

The terms "Enterprise Environment" and "Administrative Authority" are used interchangeably throughout this paper as the fundamentals apply in both cases. This paper contains four sections:

  • Specific goals of data security and data availability. This section outlines the goals for various responsibilities within Enterprise Environment computing.

  • Data Availability. Definition of and recommendations for keeping data available in the Enterprise.

  • Data Security. Definition of and specific ways data is secured in the Enterprise Computing environment. Also provides an introduction to firewalls, intrusion detection, and digital forensics, including specific steps for identifying and remediating penetration and how best to obtain and preserve evidence for further analysis and/or prosecution.

  • Administrative Scenarios and Best Practices. Various real-world best practices used to ensure data availability and data security.

This paper covers security information obtained from Microsoft Consulting Services best practices, as well as theory, techniques, and recommendations already established by respected standards bodies and noted independent authorities such as ICSA, CSI, ISSA, SANS, ISC2, and the IETF.

Part 1: Goals for Data Security and Data Availability

For the User

User responsibility within the enterprise environment is the cornerstone upon which sound security practices are built. As the saying goes, a chain is only as strong as its weakest link. The average user does not even consider security; rather, security is generally considered to be someone else's job. The reality is just the opposite: security is everyone's business. If a user's password has not been changed in several months or is the same as the account name, breaking into and controlling the entire environment through escalation of privileges becomes a distinct possibility for the hacker. Time is on the hacker's side. Even though policies may have been established in the organization to prevent these types of occurrences, they can and do happen. Remember, users go home, too. And in cases where they work on corporate tasks at home, the enterprise environment becomes larger.

For IT/System

In some organizations, IT/System personnel have the daunting task of helping forge and enforce security policies, as well as performing tasks described later in this paper. In others, these duties are an extension of the Administrator role. The division will differ with your organization, budget, and the scope of job functions as defined by your company.

For Administrators

Administrators have many tasks and responsibilities, too numerous to list in detail here. Some of their tasks include items mentioned herein; others extend the realm of responsibility to include the roles of mentor, trainer, and guardian of all things that whir, blink, and beep at the enterprise, domain, site, or organizational level. The particular level of these responsibilities differs from company to company and even organization to organization.

Part 2: Data Availability

What Is Data Availability?

The dictionary defines availability as that which is "present and ready for use; obtainable." You might therefore assume that data availability means having your data accessible and obtainable at all times. In the enterprise environment it is not quite that easy. There are quite a few factors to consider, including:

  • Available bandwidth between devices and across network connection media

  • Mechanisms for high availability and their own security and accessibility

  • Prioritization and type of data to be made available

  • Recovery roles and responsibilities

  • Type of file system and level of access

  • Type of storage/retrieval device or media including both hardware and software

  • Service Level Agreements between responsible and affected entities

  • Processing overhead of affected mechanisms

  • Disaster Recovery/Business Resumption Plan (BRP) (BRP is discussed later in this paper.)

Proper planning is essential. Every conceivable aspect and scenario must be carefully evaluated when assessing how to make your data available. The "Who, What, When, Where, How and Why" method is strongly recommended.

Considerations for the Enterprise

Data availability in the enterprise environment is much more complex than it appears upon first consideration. Every system in the enterprise must be road-mapped, its importance determined, and failsafe mechanisms addressed for every conceivable scenario. By first obtaining the big picture and defining system roles and their importance, you can then address the smaller issues, which break down into:

  • Organizational roles and policy

  • Data to be made available

  • Cost of service (and who is charged)

  • Probability of attack, and estimated total loss to service or business

  • Specific technologies to be employed for availability

Denial of Service

What Is It?

Denial of Service (DoS) is a means of denying resources to a legitimate user or process, by means either intended (SYN flooding) or unintended (accidental user disk consumption). There are many different kinds of DoS attacks, and they are becoming more commonplace every day. Consider the highly publicized wave of Distributed DoS attacks that affected the operation of several major Internet sites. Although these attacks are certainly not new, hackers are becoming increasingly creative and are using distributed computing techniques to multiply and amplify their damaging efforts. When coupled with the script-driven nature and widespread availability of these tools, it is relatively simple for the average computer user to create a vast amount of disruption using limited resources.

Reducing Your Exposure

How can you reduce your exposure? Here are some tips:

  • Applying IP egress and ingress filtering rules as discussed in RFC 2267 ( ftp://ftp.isi.edu/in-notes/rfc2267.txt ) is certainly a good place to start; a sketch of the idea follows this list. ICSA ( https://www.icsa.net/ ) basic modeling showed that if only 30 percent of Internet routers and organizational firewalls applied these rules, the threat from a single attacker would be reduced approximately a hundredfold.

  • If you don't have a response team or policies for these types of incidents, create them. Early identification and reaction is crucial to limiting your exposure.

  • Establish baseline load and traffic volume patterns for exposed systems. If these patterns are noticeably exceeded, an early warning should be triggered. Most enterprise Intrusion Detection Systems now contain this functionality.
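The core of these filtering rules is easy to express in code. The following is a minimal sketch of RFC 2267-style egress filtering, not router firmware: an edge device forwards an outbound packet only when its source address falls within the site's own prefix, so spoofed-source flood traffic dies at the boundary. The 192.168.1.0/24 prefix and the function name are assumptions made for this example.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical site prefix: 192.168.1.0/24 (an assumption for the example). */
#define SITE_PREFIX 0xC0A80100u
#define SITE_MASK   0xFFFFFF00u

/* Egress rule: forward an outbound packet only if its source address
   belongs to our own prefix; spoofed sources are dropped. */
int should_forward_outbound(uint32_t src_addr)
{
    return (src_addr & SITE_MASK) == SITE_PREFIX;
}

int main(void)
{
    uint32_t legitimate = 0xC0A80142u;  /* 192.168.1.66 */
    uint32_t spoofed    = 0x0A000001u;  /* 10.0.0.1 */

    printf("192.168.1.66 forwarded: %d\n", should_forward_outbound(legitimate));
    printf("10.0.0.1 forwarded: %d\n", should_forward_outbound(spoofed));
    return 0;
}

Ingress filtering is the mirror image: the border device refuses inbound packets that claim a source address inside the network they are entering.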

Registry Settings

The following settings should be applied to Windows NT and Windows 2000 systems; more details are available at https://www.microsoft.com/technet/archive/security/prodtech/windows/iis/dosrv.mspx

Recommended Settings to be Applied to Windows NT and Windows 2000 Systems

SynAttackProtect
Key: Tcpip\Parameters
Value Type: REG_DWORD
Valid Range: 0, 1, 2
0 (no synattack protection)
1 (reduced retransmission retries and delayed RCE (route cache entry) creation if the TcpMaxHalfOpen and TcpMaxHalfOpenRetried settings are satisfied.)
2 (in addition to 1 a delayed indication to Winsock is made.)

Default: 0 (False)
Recommendation: 2

Description: Synattack protection involves reducing the amount of retransmissions for the SYN-ACKS, which will reduce the time for which resources have to remain allocated. The allocation of route cache entry resources is delayed until a connection is made. If synattackprotect = 2, then the connection indication to AFD is delayed until the three-way handshake is completed. Also note that the actions taken by the protection mechanism only occur if TcpMaxHalfOpen and TcpMaxHalfOpenRetried settings are exceeded.

TcpMaxHalfOpen
Key: Tcpip\Parameters
Value Type: REG_DWORD—Number
Valid Range: 100–0xFFFF

Default: 100 (Professional, Server), 500 (Advanced Server)
Recommendation: default

Description: This parameter controls the number of connections in the SYN-RCVD state allowed before SYN-ATTACK protection begins to operate. If SynAttackProtect is set to 1, ensure that this value is lower than the AFD listen backlog on the port you want to protect (see Backlog Parameters for more information). See the SynAttackProtect parameter for more details.

TcpMaxHalfOpenRetried
Key: Tcpip\Parameters
Value Type: REG_DWORD—Number
Valid Range: 80–0xFFFF

Default: 80 (Professional, Server), 400 (Advanced Server)
Recommendation: default

Description: This parameter controls the number of connections in the SYN-RCVD state for which there has been at least one retransmission of the SYN sent, before SYN-ATTACK attack protection begins to operate. See the SynAttackProtect parameter for more details.

EnablePMTUDiscovery
Key: Tcpip\Parameters
Value Type: REG_DWORD—Boolean
Valid Range: 0, 1 (False, True)

Default: 1 (True)
Recommendation: 0

Description: When this parameter is set to 1 (True) TCP attempts to discover the Maximum Transmission Unit (MTU or largest packet size) over the path to a remote host. By discovering the Path MTU and limiting TCP segments to this size, TCP can eliminate fragmentation at routers along the path that connect networks with different MTUs. Fragmentation adversely affects TCP throughput and network congestion. Setting this parameter to 0 causes an MTU of 576 bytes to be used for all connections that are not to hosts on the local subnet.

NoNameReleaseOnDemand
Key: Netbt\Parameters
Value Type: REG_DWORD—Boolean
Valid Range: 0, 1 (False, True)

Default: 0 (False)
Recommendation: 1

Description: This parameter determines whether the computer releases its NetBIOS name when it receives a name-release request from the network. It was added to allow the administrator to protect the machine against malicious name-release attacks.

EnableDeadGWDetect
Key: Tcpip\Parameters
Value Type: REG_DWORD—Boolean
Valid Range: 0, 1 (False, True)

Default: 1 (True)
Recommendation: 0

Description: When this parameter is 1, TCP is allowed to perform dead-gateway detection. With this feature enabled, TCP may ask IP to change to a backup gateway if a number of connections are experiencing difficulty. Backup gateways may be defined in the Advanced section of the TCP/IP configuration dialog in the Network Control Panel. See the "Dead Gateway Detection" section in this paper for details.

KeepAliveTime
Key: Tcpip\Parameters
Value Type: REG_DWORD—Time in milliseconds
Valid Range: 1–0xFFFFFFFF

Default: 7,200,000 (two hours)
Recommendation: 300,000

Description: The parameter controls how often TCP attempts to verify that an idle connection is still intact by sending a keep-alive packet. If the remote system is still reachable and functioning, it acknowledges the keep-alive transmission. Keep-alive packets are not sent by default. This feature may be enabled on a connection by an application.

PerformRouterDiscovery
Key: Tcpip\Parameters\Interfaces\
Value Type: REG_DWORD
Valid Range: 0,1,2
0 (disabled)
1 (enabled)
2 (enable only if DHCP sends the router discover option)

Default: 2, DHCP-controlled but off by default.
Recommendation: 0

Description: This parameter controls whether Windows 2000 attempts to perform router discovery per RFC 1256 on a per-interface basis. See also SolicitationAddressBcast.

EnableICMPRedirects
Key: Tcpip\Parameters
Value Type: REG_DWORD
Valid Range: 0, 1 (False, True)

Default: 1 (True)
Recommendation: 0 (False)

Description: This parameter controls whether Windows 2000 will alter its route table in response to ICMP redirect messages sent to it by network devices such as routers.
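All of the values above live under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services. As a minimal sketch of applying one of them programmatically, the following uses the standard Win32 registry API to set SynAttackProtect to the recommended value of 2. Treat it as an illustration only, test it before deploying it, and note that TCP/IP parameter changes generally require a reboot to take effect.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY hKey;
    DWORD value = 2;  /* recommended SynAttackProtect setting from the table above */
    LONG rc;

    rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                       "SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters",
                       0, KEY_SET_VALUE, &hKey);
    if (rc != ERROR_SUCCESS)
    {
        printf("RegOpenKeyExA failed: %ld\n", rc);
        return 1;
    }

    rc = RegSetValueExA(hKey, "SynAttackProtect", 0, REG_DWORD,
                        (const BYTE *)&value, sizeof(value));
    if (rc != ERROR_SUCCESS)
        printf("RegSetValueExA failed: %ld\n", rc);

    RegCloseKey(hKey);
    return 0;
}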

DFS and Data Distribution Mechanisms

One avenue for availability is the Distributed File System (DFS). DFS is an add-on for Windows NT 4.0 that allows you to create a logical directory of shared directories spanning multiple machines. DFS is an integral part of the Windows 2000 design. The service is installed automatically. As with any technology, certain criteria should be met and a proper design followed and implemented to eliminate any unnecessary risk or exposure of data to unintended entities. A well-conceived and properly planned design is a prerequisite for any type of data availability environment.

Some improvements in the Windows 2000 DFS over version 4.x include:

  • The DFS service is installed automatically with Windows 2000.

  • The DFS service can be paused and stopped, but not removed from the administrative console.

  • DFS is integrated into the Active Directory namespace for domain-based DFS.

  • DFS roots hosted by more than one domain controller eliminate the root as a single point of failure.

  • Support for the File Replication service to permit automatic replication of file changes between DFS replicas.

  • The DFS administrative tool is now graphical by way of MMC.

  • Status flags indicate the availability of replicas.

  • DFS links can connect to other links on other Windows 2000–based servers without a fresh referral.

  • The expiration (TTL) of referrals that are cached by DFS clients is configurable on links in the DFS namespace.

  • Dynamic configuration of the DFS topology. You do not need to restart the server when adding or removing DFS roots.

  • Support for Cluster service.

Figure: The components that make up the DFS console, service, and client under Windows 2000
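From the client's perspective, a DFS namespace is consumed like any other UNC path. As a minimal sketch (the \\example.com\DfsRoot path is hypothetical), a drive letter can be mapped to a domain-based DFS root with the standard WNet API:

#include <windows.h>
#include <winnetwk.h>   /* link with mpr.lib */
#include <stdio.h>

int main(void)
{
    NETRESOURCEA nr;
    DWORD rc;

    ZeroMemory(&nr, sizeof(nr));
    nr.dwType       = RESOURCETYPE_DISK;
    nr.lpLocalName  = "X:";
    nr.lpRemoteName = "\\\\example.com\\DfsRoot";  /* hypothetical domain-based DFS path */

    rc = WNetAddConnection2A(&nr, NULL, NULL, 0);  /* use the current user's credentials */
    if (rc == NO_ERROR)
        printf("Mapped X: to the DFS root\n");
    else
        printf("WNetAddConnection2A failed: %lu\n", rc);
    return 0;
}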

Excellent additional sources of information on DFS are listed under Other Informative Sources at the end of this paper.

Business Continuity Planning

Business Continuity Planning consists mainly of Disaster Recovery Planning (DRP) and Business Resumption Planning (BRP).

The Disaster Recovery Plan (DRP)

The Disaster Recovery Plan (or DRP) consists of specific action items critical to business continuity, and it outlines and details specific procedures for various events that may disrupt business capacity in the case of man-made or natural disasters. Some typical phases in the Disaster Recovery Plan process include:

  • Awareness and Discovery

  • Risk Assessment

  • Mitigation

  • Preparation

  • Testing

  • Response and Recovery

The Business Resumption Plan (BRP)

The Business Resumption Plan (or BRP) covers the operational aspects of the Business Continuity Plan (BCP) and is critical to data availability. Specific roles are defined, budgeting addressed and selected critical phases of the DRP defined and provisioned.

Offsite Storage Considerations

Offsite storage is used with increasing frequency today in the business and corporate environment. You back up your data, someone picks it up, and stores it offsite. To retrieve your backup, you make a request, and someone brings the tape back onsite for restoration. Often little thought, however, is given to important items such as:

  • Chain of custody. This details who has access to your data at any given time. Does this person have to sign your media out of a data center to access it? Or do they just casually pick it out of the library and send it to you by courier? What identification/credentials must be presented for them to access your data? What identification/credentials must be presented for them to relinquish a copy of your data?

  • Storage environment. What climate is your data stored in? Is there a possibility of confusing your data with that of someone else?

  • Construction. The physical construction of the facility where your data will be stored. What happens to your data if the facility catches fire, floods, or is bombed?

A certified business continuity planner (CBCP) will help you to identify more areas of concern, but this should give you an idea of some frequently overlooked areas.

Hot/Cold Sites

If your budget permits, having an alternate site is recommended. In case of natural disaster, fire, flood, etc., how will your business continue? There are three types of alternate sites available for business recovery: hot, warm, and cold sites. A hot site is a completely functioning alternate site in another, independent geographic location, complete with backup systems and a duplicate, current environment. This is the most painless way to recover from a catastrophe or calamity; however, it is also the most expensive. There are third-party companies that provide this type of site as an outsourced service. A warm site contains all the systems necessary to facilitate resumption of business, but does not have current data. A cold site merely provides space for systems and personnel and the network connectivity necessary to facilitate resumption, but does not have any systems or data. Regardless of what your particular needs may be in such a situation, this decision must be carefully evaluated and made part of your operating budget if adopted.

Backup Plan

A backup plan is exactly what the name implies. All data should be backed up and available for restoration at all times. A sample backup plan follows:

  • Evaluate disaster recovery plans to ensure they adequately meet business requirements. A plan should include:

    • Risk analysis for each area including impact analysis, acceptable downtime and disaster definition.

    • Prioritized tiers based on risk analysis.

    • Key user, data processing, and vendor personnel should be identified.

    • Levels of disruption, including full, program-level, and database-level.

    • Documented scenarios for each level of disruption, which include specific procedures and assignment of responsibilities.

    • Prioritized application software and data.

    • Retention of source documents and re-input of data.

    • Security requirements for alternate processing environment.

  • Verify that all information and production resources required to resume processing are appropriately backed up.

    • Identification of critical systems and application data and programs, equipment, communications requirements, documentation, and supplies.

    • Ensure that all critical data and programs are being backed up and stored off-site on a regular basis.

    • Provisions for hardware recovery including contracts for hot or cold sites, vendor replacement commitments, and/or good neighbor (for example, borrow server from another LAN, etc.) agreements.

    • Procedures for reinstallation of data and programs, including any dependencies.

    • Identify interdependencies between business units, functions and/or application systems.

    • Determine if backup jobs are scheduled in an automatic and/or manual fashion and that they have been run as scheduled.

    • Verify that backups of all LAN applications are performed nightly and properly labeled.

    • Verify that backup media (for example, tapes, diskettes, source code) are stored in a physically secure location, both onsite and offsite, on a regular basis.

    • Identify procedures for reinstallation of data and programs, including any dependencies.

    • Verify that remote employees are reminded to back up critical files from their PCs to a specified user share on the corporate network that is included in the backup strategy regularly (for example, weekly or monthly).

    • Verify that the offsite storage facilities for backups are included in the contingency plan.

    • Verify that all backups are properly labeled.

    • Verify that the backup media are kept in a fireproof safe or other tamper-proof, element-resistant mechanism.

  • Ensure specific procedures are in place to adequately review recovery procedures including:

    • Assignments of responsibility to review and update the recovery plan to ensure that all copies reflect current conditions.

    • Timely review and approval by appropriate levels of management.

    • Authorized distribution list for contingency plans.

  • Backup and recovery plans should be tested on a periodic basis for:

    • Verification that all data and programs are available in off-site storage.

    • The ability to reconstruct systems from backups.

    • The ability to recover at an alternate site.

  • Verify that all critical file servers, database servers, and special device PCs have proper power backup and mirrored disks if critical.

    • Ensure each file server, database server, and special device PC has a separate Uninterruptible Power Supply (UPS) attached.

    • Ensure UPSs are tested on a regular basis, or based on the manufacturer's recommendation.

    • Determine if critical servers and special device PCs are mirrored.

    • Walk through the process to see what happens after hours if the backup process is unattended.

Typical Tape Backup Rotation Strategy

To conserve tapes, maintain simplicity, and ensure sufficient retention of history, a five-day tape rotation schedule should be implemented using a different tape for each day of the week and for each Friday (week-ending). Weekday tapes are retained for one month. (See the following figure.) Other parts of the strategy include:

  • The tape used on the last day of the month is rotated out of service and retained for 12 months. At the end of each year, the tape used for the last backup of the year is rotated out and retained indefinitely. Using this schedule, any file that is stored longer than one day, but not past the Friday backup, will be recoverable if a restore is requested within five business days of the creation date.

  • If the file is recorded in a Friday backup, but not month-end, the file is recoverable if requested within 20 business days.

  • If the file is recorded on a month-end backup, but not year-end backup, the file is available for recovery for approximately up to one year from creation date.

  • Files that are recorded on year-end backups will be recoverable indefinitely, based upon the retention duration of year-end tapes.

Important Note

The backup tape name (Monday, Tuesday, etc.) does not reflect the day that the backup occurs. The tape name indicates what day the tape should be inserted. Typically the tape name indicates the day before the backup is performed, as most backups take place in the early morning hours when the least amount of local activity occurs. The following table contains a common server backup set.

Common Server Backup Set

Tape name                      Mon.         Tues.        Wed.          Thurs.       Fri.
Day tape inserted into server  Mon.         Tues.        Wed.          Thurs.       Fri.
Actual backup day & time       Tues. (1am)  Wed. (1am)   Thurs. (1am)  Fri. (1am)   Sun. (10am)

Figure: Tape backup procedure
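The promotion rules in this rotation (weekday, month-end, year-end) are mechanical enough to sketch in code. The following illustration, which assumes the tape is chosen from the calendar date on which the backup runs, applies the retention rules described above; it is not a backup utility.

#include <stdio.h>
#include <time.h>

static const char *weekday_tape[] =
    { "Sun.", "Mon.", "Tues.", "Wed.", "Thurs.", "Fri.", "Sat." };

/* Pick the tape for a given backup date: the last backup of the year is
   retained indefinitely, the last backup of each month for 12 months,
   and all other days reuse the rotating weekday tapes. */
const char *tape_for(struct tm date)
{
    struct tm next = date;
    next.tm_mday += 1;
    mktime(&next);  /* normalize to find tomorrow's date */

    if (next.tm_year != date.tm_year)
        return "Year-end tape (retain indefinitely)";
    if (next.tm_mon != date.tm_mon)
        return "Month-end tape (retain 12 months)";
    return weekday_tape[date.tm_wday];
}

int main(void)
{
    time_t now = time(NULL);
    struct tm today = *localtime(&now);

    printf("Tape for today's backup: %s\n", tape_for(today));
    return 0;
}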

Many variables exist and must be planned for to ensure data availability. A cohesive strategy is merely the first step in an evolutionary process that must be continually evaluated, updated, and implemented. Executive sponsorship and budgeting are an absolute must.

Part 3: Data Security

Explanation

Data Security is more than just applying patches, hot fixes, and registry settings; it is in a continual state of flux. Its ultimate success or failure depends mainly upon knowledge, resources, technology, diligence, training, response, and time. While there are many different facets to Data Security, we will explore only a few in this paper. More information and resources are provided at the end of the paper.

Differentiations and Special Requirements of the Enterprise Environment

What Makes the Enterprise So Different?

The enterprise is unique in many aspects of Data Security. Cross-functional lines of business, politics, and differences of opinion about deployable technology often create areas that must be clearly defined. Indeed, it is more important that these areas be determined in advance than the who, what, where, and when of the underlying technology itself. The room for error is obviously greater, so securing the enterprise properly should take greater effort. Rules that apply on an organizational level may have no meaning unless used as a foundation. Many avenues for penetration exist within the enterprise. When you add telecommuters, remote access, and Virtual Private Networks to the mix, your enterprise extends to far more than merely desktops and servers. In the enterprise you should have at least one strategy developed for each environment, and an overall strategy based on a combination of them all.

Common Mistakes and Backdoors

Common mistakes can unwittingly introduce backdoors into your environment. These backdoors vary in significance and introduce different levels of compromise that can be avoided if a little care is taken. Some of the more common mistakes include:

  • Installing server software on desktop machines. If an attacker compromises your network, he can then install backdoor programs on your desktop systems. Unlike servers, these systems are usually neglected; proper diligence in securing them is usually not taken.

  • Installing desktop software on servers/gateways/routers. E-mail programs, client remote control software and chat type programs pose an unwarranted and needless risk. Keep it as simple as possible without sacrificing carefully evaluated functionality.

  • Failure to keep your client population informed as to current risks. Keeping users informed is easily accomplished via voicemail broadcast messages and e-mail newsletters. An uninformed or misinformed user population is vulnerable. Let them know the threats, whom to contact, and have a response team ready to deal with potential incidents.

  • Failure to deliver updated security patches to an evaluated environment. The RDS exploit described at https://www.microsoft.com/technet/security/bulletin/ms98-004.mspx is a prime example of a preventable exploit that has had a fix available since 1998. Yet even with the available fix, it remains the fourth most common vulnerability according to a recent SANS security survey.

  • Leaving the default and sample files located in default directories gives an attacker a potential edge by decreasing the amount of reconnaissance needed to inspect an environment. It also decreases the probability that the intrusion will be noticed.

  • Allowing the user population to post to mail lists, Internet discussion lists, etc., using their corporate email accounts or credentials without careful scrutiny presents risks. This behavior can give away information about your company such as infrastructure, mail and relay servers, and IP addresses. It also gives spam trawlers something to do, increasing the overhead of mail servers.

Physical Security and Hardware Lockdown

Considerations and Best Practices

Special care and consideration must be given to the placement of devices. Placement should be device-specific, and policy must govern access to each device on a need-to-access basis, rather than a want-to-access basis.

Policy, Placement, and Procedures

Policy

In order to have effective physical security, it is first necessary to have proper policy governing physical access to the device to be secured. All operations and functions that can be controlled remotely should be.

Placement

Placement of devices is dependent upon the level of physical access needed. For instance, the average user does not need physical access to any servers.

Intrusion Detection for the Enterprise

Understanding Data Traffic Patterns and Recognizing Deviation

In order to recognize deviation from normal traffic on your network, it is first necessary to create a footprint of your enterprise. Footprinting is a technique used to establish baselines that determine whether a given traffic pattern is out of the ordinary for a given situation. Depending on the device being baselined, your traffic pattern will differ; consequently, your monitoring strategy should be adjusted accordingly. For example, a Primary Domain Controller's traffic patterns differ substantially from those of a print server.

Intrusion Detection Systems typically work from two basic models:

  • Models that detect anomalies and changes from established baselines (or anomaly detection).

  • Models that detect traffic matching a standard signature file normally provided with the IDS (or signature detection).
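To make the first model concrete, here is a deliberately simplified sketch of baseline-driven anomaly detection: sample a traffic counter at fixed intervals and flag any interval that strays more than three standard deviations from the footprinted baseline. The baseline figures and sample values are invented for the example; a real IDS derives them from observation of your own network.

#include <stdio.h>
#include <math.h>

/* Baseline established by footprinting (figures assumed for the example). */
#define BASELINE_MEAN   1200.0  /* packets per sampling interval */
#define BASELINE_STDDEV  150.0

/* Flag an interval whose packet count deviates from the baseline by
   more than three standard deviations. */
int is_anomalous(double packets)
{
    return fabs(packets - BASELINE_MEAN) > 3.0 * BASELINE_STDDEV;
}

int main(void)
{
    double samples[] = { 1180.0, 1250.0, 5400.0, 1100.0 };  /* invented counts */
    size_t i;

    for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        if (is_anomalous(samples[i]))
            printf("Interval %u: %.0f packets deviates from the baseline\n",
                   (unsigned)i, samples[i]);
    return 0;
}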

Some examples of abnormal traffic behavior include:

  • Port scans. Probing ports in succession in an attempt to enumerate services.

  • Attempted DNS zone transfers.

  • E-mail reconnaissance. Repeated failed attempts to send mail to variations of a user's name may give away information about infrastructure.

  • Ping sweeping. Pinging multiple hosts on a network in succession in an attempt to enumerate information about infrastructure and placement of devices.

  • Web snaking. Multiple GET requests that download the entire contents of a Web site and attempt to enumerate subdirectories or download scripts.

Buffer Overruns and Hostile Mobile Code

Explanation and Examples

Buffer overruns are normally caused by programming errors. Depending on the severity of the error, it is sometimes possible to execute arbitrary code on the target system. Obviously this is not desirable. The majority of these problems are caused by improper string handling. Two examples of improper string handling include:

  • Allowing input of a string too large for the buffer (as is the case below) without providing error checking.

  • Failing to take into account the extra byte for the null terminator.

Buffer overruns occur when more information is put into a given buffer than it can hold. If a line of code designates a block of memory fixed at 50 bytes, and 51 bytes are fed into this space, the extra byte must go somewhere. The effects are usually not desirable.

To understand this more clearly, consider the following example:

#include <stdio.h>

int main(void)
{
 char name[50];                /* room for 49 characters plus the null terminator */
 printf("Input your name: ");
 gets(name);                   /* gets() performs no bounds checking */
 printf("Hello, %s", name);
 return 0;
}

This is an example of a "Hello World" type program. The buffer is fixed at a length of 50 bytes (one byte is reserved for the null terminator, so it holds 49 characters plus the terminator). If someone inputs a name of 50 or more characters, gets() writes past the end of the buffer and overflows it.
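A bounds-checked version of the same program avoids the overrun by telling the input routine how large the buffer actually is. A minimal sketch using fgets:

#include <stdio.h>
#include <string.h>

int main(void)
{
 char name[50];  /* 49 characters plus the null terminator */

 printf("Input your name: ");
 if (fgets(name, sizeof(name), stdin) != NULL)
 {
  name[strcspn(name, "\n")] = '\0';  /* strip the trailing newline, if any */
  printf("Hello, %s", name);
 }
 return 0;
}

Input beyond 49 characters is simply left unread rather than written past the end of the buffer.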

For a more detailed explanation and further examples, see the resources listed under Other Informative Sources at the end of this paper.

Trojans, Controls, and Worms

Trojans are programs that do something other than what they appear to do. A Trojan program can create a number of different effects ranging from deleting or modifying files to giving remote control access to the affected computer. Trojans normally rely upon user intervention to carry out their potentially devastating routines. These programs can be wrapped or joined to another program (such as a screen saver or game) decreasing the odds of detection before affecting the target computer.

Controls such as ActiveX controls can be programmed to do a myriad of undesired things, even though they are usually signed. A signed control only guarantees the authenticity of the signer; it does not guarantee that the control is safe. Users should be educated about these risks through client education programs and policies.

Worms have the ability to self-replicate through your enterprise very quickly, with little or no user intervention. They are normally contained within e-mail messages. Their payloads generally range from the destructive to the merely annoying. As anti-virus programs have historically been reactive to viruses rather than proactive, staying informed is an essential item in any security plan.

Firewalls and Screened Subnets

Firewalls and screened subnets (formerly known as DMZs) are commonly used throughout enterprise environments to supplement security, not replace it. A firewall can only reduce the risk of a security breach, not guarantee elimination of compromise. No device, firewall, or operating system should ever be considered impervious. A firewall is normally either a software/operating-system-based device or a hardware/appliance-based device with at least two network interface cards installed. Firewalls generally fall into two basic models:

  • Ones that allow all traffic to pass unless specifically denied

  • Ones that deny all traffic unless specifically allowed

Typical enterprise-capable firewalls used today default somewhere between the two basic models. Most leave only the most commonly used ports open by default. From there, rules and routes are defined according to policy, needs, and placement.
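The second model reduces to a default-deny rule lookup: traffic passes only when an explicit rule permits it. A minimal sketch, with a permitted-port list invented for the illustration:

#include <stdio.h>
#include <stdint.h>

struct rule { uint16_t port; const char *service; };

/* Explicitly allowed destination ports (an assumption for the example). */
static const struct rule allowed[] = {
    { 25,  "SMTP"  },
    { 80,  "HTTP"  },
    { 443, "HTTPS" },
};

int permit(uint16_t dst_port)
{
    size_t i;
    for (i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++)
        if (allowed[i].port == dst_port)
            return 1;
    return 0;  /* default deny: anything not explicitly allowed is dropped */
}

int main(void)
{
    printf("Port 80: %s\n", permit(80) ? "pass" : "drop");
    printf("Port 6667: %s\n", permit(6667) ? "pass" : "drop");
    return 0;
}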

While it is fairly easy to see the benefits of a firewall, its limitations are not so easily discerned. Some of its limitations include:

  • Firewalls cannot protect traffic that is not sent through them. Most companies have at least one user with a rogue modem installed that the user accesses frequently from home. Proper policy enforcement helps prevent this.

  • Firewalls do not check data integrity or authenticate datagrams at the transport or network layers. This leaves the firewall susceptible to forged packets and spoofing, and enables viruses to pass through.

  • Firewalls are no substitute for trained personnel. Today's firewalls require knowledgeable, trained administration and configuration. It takes only one misconfiguration at the firewall level to produce catastrophic consequences.

A simplified view of firewall operation is shown in the following illustration:

Figure: Simplified firewall operation

Firewalls operate at various layers of the OSI and TCP/IP models. The lowest layer at which they can operate is layer three (the Network layer of the OSI model; the IP layer of the TCP/IP model). A firewall's IP layer typically looks like this:

Figure: A firewall IP layer

Commonly Used Configurations

Three Pronged Screened Subnet

A three-pronged firewall consists of one firewall with an interface assigned to the Internet, a second interface assigned to the private network, and a third interface assigned to the screened subnet. Traffic patterns are defined by firewall rules enforced at each of the three interfaces. The three-pronged firewall protects the private network by restricting all Internet traffic to the screened subnet.

To implement a three-pronged firewall, the firewall software must support multiple zone definitions. You must establish separate zones for the Internet, the private network, and the screened subnet.

Advantages:

  • Less cost because only one firewall computer is implemented.

  • Can be configured with multiple screened segments with different levels of access to the internal network.

Disadvantages:

  • Not all firewalls support this configuration.

  • If the firewall is compromised, all segments of the internal network will be exposed.


MidGround-screened Subnet

A mid-ground screened subnet is an area of the network between two or more firewalls. One firewall acts as a barrier between the Internet and the screened subnet, and the second firewall acts as a boundary between the screened subnet and the internal network. A mid-ground screened subnet can provide additional security to the internal network when two different brands of firewall are used, because a breach of the external firewall does not by itself grant access to the internal network. If the external firewall is compromised, an external hacker must also breach the internal firewall to gain access to internal network resources, and using a different manufacturer forces the hacker to use different methods and toolsets than those used to compromise the exterior firewall.

Some advantages and disadvantages of implementing a mid-ground screened subnet are:

Advantages:

  • An attacker must circumvent two or more firewalls to access the internal network.

  • Use of two different firewall brands can lessen the chance of successful penetration.

Disadvantages:

  • Extra expense for the additional firewalls.

  • Additional configuration is required.


When both techniques are used cohesively, an added level of protection can be established. The more complicated a firewall/DMZ environment is, the more difficult sustained penetration becomes. However, it also increases the probability of a misconfiguration and adds to the difficulty of sustained maintainability. Careful precautions should be undertaken and every possible avenue of penetration should be weighed against the compromise risk of the potentially exposed resource to be secured.

The Microsoft ISA Server

Microsoft's ISA Server ( https://www.microsoft.com/isaserver ) (Internet Security and Acceleration Server) adds a new type of security to today's firewall environment. Some features and benefits include:

Enterprise Security

Connecting networks and users to the Internet introduces security and productivity concerns. ISA Server features provide organizations with a comprehensive means to control access and monitor usage. ISA Server protects networks from unauthorized access, inspects traffic, and alerts administrators to attacks.

  • Multi-layer firewall. Networks can be threatened in a variety of ways. Maximize network security with packet, circuit, and application-level traffic screening to reduce the risk of unauthorized access.

  • Smart application filters. ISA Server smart application filters recognize content and apply policy as the content traverses the network. Control application-specific traffic, such as electronic mail and streaming media, with data-aware filters to enhance security. ISA Server can take advantage of Active Directory™ policy-based management features.

  • Dynamic IP filtering. By restricting access to an as-needed basis and opening ports only for active sessions, ISA Server reduces the risk of external attacks.

Fast-Access Web Caching

The Internet offers organizations exciting productivity benefits, but only to the extent that content access is fast and cost effective. The ISA Server Web cache can minimize performance bottlenecks and save network bandwidth resources by serving up locally cached Web content.

  • High-performance Web cache. ISA Server accelerates Web access and saves network bandwidth, with ultra-fast RAM caching and efficient disk input/output (I/O).

  • Scalability. If not designed for scalability, caching can have a negative impact on content access as the volume of cached content increases. ISA Server was built with scaling in mind, providing efficient scaling and dynamic load balancing with its Cache Array Routing Protocol (CARP).

  • Active Caching. ISA Server offers greater network efficiencies the longer it is deployed. It studies usage patterns and suggests the best way to cache content. Through proactive caching of popular objects and scheduled download of entire sites, ISA Server provides the freshest content for each user.

Flexible Management

By providing an infrastructure for Web security and acceleration, ISA Server streamlines policy management and simplifies administration of internetworking.

  • Built for Windows 2000. ISA Server integrates with many of the core services in the Microsoft Windows 2000 ( https://www.microsoft.com/isaserver/techinfo/ ) operating system, providing a consistent and powerful way to manage user access, configuration, and rules. Core functionality includes authentication, management tools, bandwidth control, SecureNAT, and virtual private networking ( https://www.microsoft.com/technet/prodtechnol/windows2000serv/deploy/confeat/vpnscen.mspx ), which build on Windows 2000 technologies. Using the Active Directory service ( https://www.microsoft.com/windows2000/guide/server/features/dirlist.asp ), ISA Server simplifies management tasks.

  • Policy enforcement. Security policies are easier to implement and enforce with ISA Server. Network and security managers can control access by user and group, application, content type, and schedule. Enterprise-level management augments local (array-level) policy for company-wide rules and to support roaming users.

  • Rich management tools. Those tasked with security and network management need reliable, easy ways to monitor the network. ISA Server provides detailed logging and customizable alerts to keep managers apprised of network security and performance issues. Graphical reports and remote administration allow network professionals to easily examine the performance and use of the server.

Extensible Security

Security policies and imperatives vary from organization to organization. Traffic volume and content formats also pose unique concerns. Because no one product fits all security and performance needs, the ISA Server was built to be highly extensible. It offers a comprehensive SDK for in-house development, a large selection of third-party add-on solutions, and an extensible administration option.

  • Comprehensive SDK. ISA Server includes a comprehensive Software Developers Kit that includes full API documentation and step-by-step samples of filters and administration extensions that enable organizations to address specific security and performance concerns.

  • Large selection of third party-solutions. A growing number of third-party partners offer functionality that extends and customizes ISA Server, including virus scanning, management tools, content filtering, site blocking, real-time monitoring, and reporting.

  • Extensible administration. Cut down on administrative tasks by automating them. ISA Server includes scriptable Component Object Model (COM) objects for programmatic read/write access to all rules and administrative options.

ISA Server Features at a Glance

  • Multi-layer firewall. Maximize security with packet-level, circuit-level, and application-level traffic screening.

  • High-performance Web cache. Provide users with accelerated Web access and save network bandwidth.

  • Windows 2000 integration. Manage ISA Server users, configuration, and rules with the Windows 2000 Active Directory™ service. Authentication, management tools, and bandwidth control extend Windows 2000 technologies.

  • Stateful inspection. Examine data crossing the firewall in the context of its protocol and the state of the connection.

  • Scalability. Add servers to scale up your cache easily and efficiently with dynamic load balancing and the Cache Array Routing Protocol (CARP). Maximize network availability and efficient bandwidth use with distributed and hierarchical caching.

  • Virtual private networking. Provide standards-based secure remote access with the integrated Virtual Private Networking services of Windows 2000.

  • Detailed rules for managing traffic and enforcing policy. Control network and Internet access by user, group, application, content type, schedule, and destination.

  • Broad application support. Integrate with major Internet applications using dozens of predefined protocols.

  • Transparency for all clients. Compatibility with clients and application servers on all platforms, with no client software required.

  • Smart application filters. Control application-specific traffic, such as e-mail and streaming media, with data-aware filters that block only certain types of content.

  • Smart caching. Ensure the freshest content for each user through proactive caching of popular objects, and pre-load the cache with entire Web sites on a defined schedule.

  • Rich administration tools. Take advantage of powerful remote management capability, detailed logging, customizable alerts, and graphical task pads to simplify security and cache management.

  • Dynamic packet filtering. Reduce the risk of external attacks by opening ports only when needed.

  • Distributed and hierarchical caching. Maximize availability and save bandwidth for efficient network utilization, with multiple and backup routes.

  • Integrated bandwidth control. Prioritize bandwidth allocation by group, application, site, or content type.

  • Secure publishing. Protect Web servers and e-commerce applications from external attacks.

  • Efficient content distribution. Distribute and cache Web sites and e-commerce applications, bringing Web content closer to users, improving response times and cutting bandwidth costs.

  • Integrated intrusion detection. Identify common denial-of-service attacks such as port scanning, "WinNuke," and "Ping of Death."

  • Built-in reporting. Run scheduled standard reports on Web usage, application usage, network traffic patterns, and security.

  • System hardening. Secure the operating system with multiple levels of lockdown.

  • Streaming media support. Save bandwidth by splitting live media streams on the firewall.

Securing the Perimeter

Securing the perimeter of your organization requires a very detailed plan. The initial mapping of your environment is usually the best place to start. This is best done by evaluating three categories of resources:

  • Externally available resources. Resources made available to outside users via the Internet.

  • Internally available resources. Resources made available to employees and contractors segmented/isolated from the Internet.

  • Virtual Private Networks. Internal resources made available privately and joined over a public network.


The perimeter is the combination of the three general baselines above. Each baseline should, at a minimum, address the following items:

  • The mapping of physical and network routes between devices

  • Access rights to devices (both policy-based and, if dealing with contracted vendors, covered by service level and confidentiality agreements)

  • Length and duration of external exposure of resources

  • Acceptable level of risk for each resource

Avoiding Common Mistakes

Some of the more common mistakes in firewall configuration and techniques include:

  • Not tight enough. When the person configuring the firewall routes and policies does not know the particulars of an application, the easiest decision is to open too many ports. It is much more secure to allow nothing, and then open ports on an as-needed basis. Proper documentation on applications and devices should always be provided, in detail, to the party or parties responsible for configuring the firewall.

  • Inexperienced or improperly trained firewall administrators. An inexperienced or untrained firewall administrator is likely to make mistakes. A novice will undoubtedly ignore or miss some detail critical to maintaining integrity. Duties should be separated. Those who specialize in firewall products and techniques should help govern who has applicable rights to initiate/modify/remove firewall configurations and routes.

  • Misapplied or missing security patches. In many cases service packs and security patches are overlooked and neglected because of limited resources for testing and deploying. You should develop a migration or testing plan in advance to deal with these contingencies. Just as hackers stay up-to-date on firewall vulnerabilities, so should the people defending them.

  • Poorly planned access routes. Many times after a firewall has been set up, routes are added to accommodate a sudden business need. These are best planned in advance.

Access Control Mechanisms

Access control mechanisms can best be grouped into the following three categories, with these subcategories at a minimum:

  • Physical security

    • Hardware locks

    • Security guards

    • UPSs and fire/surge suppression

    • Alarms and triggering devices

  • Administrative controls and policies

    • Separation of duties

    • Acceptable use and other policies/statements

    • Employee hire and termination procedures/guidelines

    • System/policy/control auditing and reporting

  • Technically based controls

    • Password enforcement

    • Intrusion detection and prevention technologies

    • Encryption/decryption technologies

    • File and OS based access control lists

As we have already covered some physical security and administrative controls and policies, let's cover a few technically based controls.

Proper Access Control Lists

Proper access control lists are essential to the security of any operating system. Without them, your systems would fall victim to the whims of anyone with network access to them. An access control matrix for each organization or resource will aid you greatly in determining the current state of your environment, if you don't already know it. Here is an example of such a matrix:

Resource person    A    B    C    D    E    F    G    H    I    J

Fred               W    R    R    F    R    R    N    W    N    X
Jane               R    R    R    X    X    R    W    X    F    X
Bob                R    X    X    X    X    R    W    F    X    X
Alice              R    W    F    X    W    X    N    R    N    X
Mary               R    W    W    W    W    F    R    R    W    X
Ken                F    F    X    R    W    N    X    W    R    F
Joan               X    W    W    W    X    R    F    N    R    X
Mark               X    X    W    W    F    X    R    N    X    W
Harry              F    N    R    X    R    W    X    W    W    R

Access Type: R=Read, W=Write, X=Execute, F=Full control, N=No Access

As you can see, the above matrix can be applied to nearly any resource or subject, not just file system access. By merely substituting groups for "person" and labeling the resources appropriately, you can create a custom matrix for any controllable resource. General assumptions about the owner(s) of the resource or their organizational roles can often be made at a glance.
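Such a matrix also translates directly into code. The following hypothetical sketch encodes two of the rows above as strings of access letters (one letter per resource, A through J) and answers lookups with a default of No Access:

#include <stdio.h>
#include <string.h>

struct entry { const char *person; const char *rights; };

/* One access letter per resource A through J, taken from the matrix above;
   the remaining rows are elided for brevity. */
static const struct entry matrix[] = {
    { "Fred", "WRRFRRNWNX" },
    { "Jane", "RRRXXRWXFX" },
};

char access_for(const char *person, int resource /* 0 = A ... 9 = J */)
{
    size_t i;
    for (i = 0; i < sizeof(matrix) / sizeof(matrix[0]); i++)
        if (strcmp(matrix[i].person, person) == 0)
            return matrix[i].rights[resource];
    return 'N';  /* unknown person: default to No Access */
}

int main(void)
{
    printf("Fred's access to resource D: %c\n", access_for("Fred", 3));  /* F */
    printf("Jane's access to resource I: %c\n", access_for("Jane", 8));  /* F */
    return 0;
}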

The minimum recommended file system ACLs for Windows NT and some default installation settings for Windows 2000 are as follows:

Recommended Windows NT File System/Registry ACLs

(Note: These should only be applied to a fresh install of Windows NT.)

\WINNT and all subdirectories under it:
    Administrators: Full Control
    CREATOR OWNER: Full Control
    Everyone: Read
    SYSTEM: Full Control

Now, within the \WINNT tree, apply the following exceptions to the general security:

\WINNT\REPAIR:
    Administrators: Full Control

\WINNT\SYSTEM32\CONFIG:
    Administrators: Full Control
    CREATOR OWNER: Full Control
    Everyone: List
    SYSTEM: Full Control

\WINNT\SYSTEM32\SPOOL:
    Administrators: Full Control
    CREATOR OWNER: Full Control
    Everyone: Read
    Power Users: Change
    SYSTEM: Full Control

\WINNT\COOKIES, \WINNT\FORMS, \WINNT\HISTORY, \WINNT\OCCACHE, \WINNT\PROFILES, \WINNT\SENDTO, \WINNT\Temporary Internet Files:
    Administrators: Full Control
    CREATOR OWNER: Full Control
    Everyone: Special Directory Access (Read, Write, and Execute); Special File Access (None)
    SYSTEM: Full Control

Several critical operating system files exist in the root directory of the system partition on Intel 80486 and Pentium-based systems. In high-security installations you might want to assign the following permissions to these files:

\Boot.ini, \Ntdetect.com, \Ntldr:
    Administrators: Full Control
    SYSTEM: Full Control

\Autoexec.bat, \Config.sys:
    Everybody: Read
    Administrators: Full Control
    SYSTEM: Full Control

\TEMP directory:
    Administrators: Full Control
    SYSTEM: Full Control
    CREATOR OWNER: Full Control
    Everyone: Special Directory Access (Read, Write, and Execute); Special File Access (None)
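On Windows NT, permissions such as these can be applied with the built-in cacls.exe tool. As a hedged illustration only (without the /E switch, cacls replaces the existing ACL entirely, so try this on a test system first), the \WINNT\REPAIR recommendation above could be applied as:

cacls "C:\WINNT\repair" /T /G Administrators:F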

Protecting the Registry (NT4.0)

In addition to the considerations for standard security, the administrator of a high-security installation might want to set protections on certain keys in the registry.

By default, protections are set on the various components of the registry that allow work to be done while providing standard-level security. For high-level security, you might want to assign access rights to specific registry keys. This should be done with caution, because programs that the users require to do their jobs often need to access certain keys on the users' behalf.

For each of the keys listed below, make the following change:

Access allowed for the Everyone group: QueryValue, Enumerate Subkeys, Notify, and Read Control

In the HKEY_LOCAL_MACHINE on Local Machine dialog

\SOFTWARE

This change is recommended. It locks the system in terms of who can install software. Note that it is not recommended that the entire subtree be locked using this setting because that can render certain software unusable.

\SOFTWARE\Microsoft\Rpc (and its subkeys)

This locks the RPC services.

\SOFTWARE\Microsoft\Windows NT\CurrentVersion

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AeDebug

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Compatibility

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Drivers

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Embedding

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontSubstitutes

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Font Drivers

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontMapper

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Font Cache

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\GRE_Initialize

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\MCI

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\MCI Extensions

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib

Consider removing Everyone: Read access on this key. This access allows remote users to see performance data on the machine. Instead, you could give INTERACTIVE: Read access, which will allow only interactively logged-on users, besides administrators and the system, to access this key.

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Ports (and all subkeys)

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Type 1 Installer

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WOW (and all subkeys)

\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows3.1MigrationStatus (and all subkeys)

\SYSTEM\CurrentControlSet\Services\Lanmanserver\Shares

\SYSTEM\CurrentControlSet\Services\UPS

Note that besides setting security on this key, it is also required that the command file (if any) associated with the UPS service is appropriately secured, allowing Administrators: Full Control, System: Full Control only.

\SOFTWARE\Microsoft\Windows\CurrentVersion\Run

\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce

\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall

In the HKEY_CLASSES_ROOT on Local Machine dialog

\HKEY_CLASSES_ROOT (and all subkeys)

In the HKEY_USERS on Local Machine dialog

\.DEFAULT

The Registry Editor supports remote access to the Windows NT registry. To restrict network access to the registry, use the Registry Editor to create the following registry key:

Hive: HKEY_LOCAL_MACHINE
Key: System\CurrentControlSet\Control\SecurePipeServers
Name: \winreg

The security permissions set on this key define which users or groups can connect to the system for remote registry access. The default Windows NT Workstation installation does not define this key and does not restrict remote access to the registry. Windows NT Server permits only administrators remote access to most of the registry.

Some paths that need to be accessible by nonadministrators are specified in the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurePipeServers\winreg\AllowedPaths key.

In environments where members of the Server Operators group are not sufficiently trusted, it is recommended that security on the following key be changed as shown below:

HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon:
    CREATOR OWNER: Full Control
    Administrators: Full Control
    SYSTEM: Full Control
    Everyone: Read

Windows 2000 Default Settings

File System

The following table describes the default access control settings that are applied to file system objects for Power Users and Users during a clean install of Windows 2000 onto an NTFS partition. For directories, unless otherwise stated (in parentheses), the permissions apply to the directory, subdirectories, and files.

  • %systemdir% refers to %windir%\system32

  • *.* refers to the files (not directories) contained in a directory

  • RX means Read and Execute

File system object              Default Power User permissions      Default User permissions

C:\boot.ini                     RX                                  None
C:\ntdetect.com                 RX                                  None
C:\ntldr                        RX                                  None
C:\ntbootdd.sys                 RX                                  None
C:\autoexec.bat                 Modify                              RX
C:\config.sys                   Modify                              RX
\Program Files                  Modify                              RX
%windir%                        Modify                              RX
%windir%\*.*                    RX                                  RX
%windir%\Config\*.*             RX                                  RX
%windir%\Cursors\*.*            RX                                  RX
%windir%\Temp                   Modify                              Synchronize, Traverse, Add File, Add Subdir
%windir%\repair                 Modify                              List
%windir%\addins                 Modify (Dir\Subdirs), RX (Files)    RX
%windir%\Connection Wizard      Modify (Dir\Subdirs), RX (Files)    RX
%windir%\Fonts\*.*              RX                                  RX
%windir%\Help\*.*               RX                                  RX
%windir%\inf\*.*                RX                                  RX
%windir%\java                   Modify (Dir\Subdirs), RX (Files)    RX
%windir%\Media\*.*              RX                                  RX
%windir%\msagent                Modify (Dir\Subdirs), RX (Files)    RX
%windir%\security               RX                                  RX
%windir%\Speech                 Modify (Dir\Subdirs), RX (Files)    RX
%windir%\system\*.*             RX                                  RX
%windir%\twain_32               Modify (Dir\Subdirs), RX (Files)    RX
%windir%\Web                    Modify (Dir\Subdirs), RX (Files)    RX
%systemdir%                     Modify                              RX
%systemdir%\*.*                 RX                                  RX
%systemdir%\config              List                                List
%systemdir%\dhcp                RX                                  RX
%systemdir%\dllcache            None                                None
%systemdir%\drivers             RX                                  RX
%systemdir%\CatRoot             Modify (Dir\Subdirs), RX (Files)    RX
%systemdir%\ias                 Modify (Dir\Subdirs), RX (Files)    RX
%systemdir%\mui                 Modify (Dir\Subdirs), RX (Files)    RX
%systemdir%\os2\*.*             RX                                  RX
%systemdir%\os2\dll\*.*         RX                                  RX
%systemdir%\ras\*.*             RX                                  RX
%systemdir%\ShellExt            Modify (Dir\Subdirs), RX (Files)    RX
%systemdir%\Viewers\*.*         RX                                  RX
%systemdir%\wbem                Modify (Dir\Subdirs), RX (Files)    RX
%systemdir%\wbem\mof            Modify                              RX
%UserProfile%                   Full Control                        Full Control
All Users                       Modify                              Read
All Users\Documents             Modify                              Modify
All Users\Application Data      Modify                              Modify

Note that a Power User can write new files into the following directories, but cannot modify the files that are installed there during text-mode setup. Furthermore, all other Power Users will inherit Modify permissions on files created in these directories.

  • %windir%

  • %windir%\Config

  • %windir%\Cursors

  • %windir%\Fonts

  • %windir%\Help

  • %windir%\inf

  • %windir%\Media

  • %windir%\system

  • %systemdir%

  • %systemdir%\os2

  • %systemdir%\os2\dll

  • %systemdir%\ras

  • %systemdir%\Viewers

For directories designated as [Modify (Dir/Subdirs) RX (Files)], Power Users can write new files; however, other Power Users will only be able to read those files.

Default Registry ACLs for Windows 2000

The following table describes the default access control settings that are applied to registry objects for Power Users and Users during a clean install of Windows 2000. For a given object, permissions apply to that object and all child objects unless the child object is also listed in the table.

Registry object                           Default Power User permissions    Default User permissions

HKEY_LOCAL_MACHINE
\SOFTWARE                                 Modify                            Read
\Classes\helpfile                         Read                              Read
\Classes\.hlp                             Read                              Read
\Microsoft
\Command Processor                        Read                              Read
\Cryptography\OID                         Read                              Read
\Cryptography\Providers\Trust             Read                              Read
\Cryptography\Services                    Read                              Read
\Driver Signing                           Read                              Read
\EnterpriseCertificates                   Read                              Read
\Non-Driver Signing                       Read                              Read
\NetDDE                                   None                              None
\Ole                                      Read                              Read
\Rpc                                      Read                              Read
\Secure                                   Read                              Read
\SystemCertificates                       Read                              Read
\Windows\CV\RunOnce                       Read                              Read
\Windows NT\CurrentVersion
\DiskQuota                                Read                              Read
\Drivers32                                Read                              Read
\Font Drivers                             Read                              Read
\FontMapper                               Read                              Read
\Image File Execution Options             Read                              Read
\IniFileMapping                           Read                              Read
\Perflib                                  Read (via Interactive)            Read (via Interactive)
\SecEdit                                  Read                              Read
\Time Zones                               Read                              Read
\Windows                                  Read                              Read
\Winlogon                                 Read                              Read
\AsrCommands                              Read                              Read
\Classes                                  Read                              Read
\Console                                  Read                              Read
\EFS                                      Read                              Read
\ProfileList                              Read                              Read
\Svchost                                  Read                              Read
\Policies                                 Read                              Read
\SYSTEM\CurrentControlSet                 Read                              Read
\Control\SecurePipeServers\winreg         None                              None
\Control\Session Manager\Executive        Modify                            Read
\Control\TimeZoneInformation              Modify                            Read
\Control\WMI\Security                     None                              None
\HARDWARE                                 Read (via Everyone)               Read (via Everyone)
\SAM                                      Read (via Everyone)               Read (via Everyone)
\SECURITY                                 None                              None
HKEY_USERS
\USERS\.DEFAULT                           Read                              Read
\USERS\.DEFAULT\SW\MS\NetDDE              None                              None
HKEY_CURRENT_CONFIG                       = HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\HardwareProfiles\Current
HKEY_CURRENT_USER                         Full Control                      Full Control
HKEY_CLASSES_ROOT                         = merge of HKEY_LOCAL_MACHINE\SOFTWARE\Classes and HKEY_CURRENT_USER\SOFTWARE\Classes

Encrypting File System

EFS provides the core file encryption technology used to store Windows NT file system (NTFS) files encrypted on disk. EFS particularly addresses security concerns raised by tools available on other operating systems that allow users to access files from an NTFS volume without an access check. With EFS, data in NTFS files is encrypted on disk. The encryption technology used is public key–based and runs as an integrated system service, making it easy to manage, difficult to attack, and transparent to the user. If a user attempting to access an encrypted NTFS file has the private key to that file, the user can open the file and work with it transparently as a normal document. A user without the private key to the file is simply denied access.

  • EFS in Windows 2000 provides users the ability to encrypt NTFS directories using a strong public key–based cryptographic scheme whereby all files in the directories are encrypted. Individual file encryption, though supported, is not recommended because of unexpected behavior of applications.

  • EFS also supports encryption of remote files accessible via file shares. If users have roaming profiles, the same key and certificate may be used on certain trusted remote systems. On others, local profiles are created and local keys are used.

  • EFS provides enterprises the ability to set up data recovery policies such that data encrypted using EFS can be recovered when required.

  • The recovery policy is integrated with overall Windows 2000 security policy. Control of this policy may be delegated to individuals with recovery authority. Different recovery policies may be configured for different parts of the organization.

  • Data recovery in EFS is a contained operation. It only discloses the recovered data, not the individual user's key that was used to encrypt the file.

  • File encryption using EFS does not require users to decrypt and reencrypt the file on every use. Decryption and encryption happen transparently as files are read from and written to disk (a minimal invocation sketch follows this list).

  • EFS supports backup and restore of encrypted files without decryption. NTBackup supports backup of encrypted files.

  • EFS is integrated with the operating system such that it stops the leaking of key information to page files and ensures that all copies of an encrypted file, even if moved, are encrypted.

  • The North American version of EFS will use DESX as the file encryption algorithm with full 128-bit key entropy. The international version of EFS will also use DESX as the encryption algorithm; however, the file encryption key will be reduced to have only 40-bit key entropy.

  • Several protections are in place to ensure that data recovery is possible and there is no data loss in case of total system failures.
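
As an illustration of how thin the user-visible surface of EFS is, the sketch below calls the underlying Win32 EncryptFile and DecryptFile APIs directly through Python's standard ctypes module. The directory path is illustrative, and the code assumes it is running on Windows against an NTFS volume.

    # Encrypt or decrypt a file or directory via the Win32 EFS APIs.
    # Encrypting a directory (recommended above) causes new files created
    # in it to be encrypted automatically. Requires Windows and NTFS.
    import ctypes

    advapi32 = ctypes.windll.advapi32

    def efs_encrypt(path):
        if not advapi32.EncryptFileW(path):    # returns nonzero on success
            raise ctypes.WinError()

    def efs_decrypt(path):
        if not advapi32.DecryptFileW(path, 0):  # second argument is reserved
            raise ctypes.WinError()

    efs_encrypt(r"C:\Sensitive")  # illustrative path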

More information on Encrypting File System, its uses, configuration, and deployment can be found at:

https://msdn2.microsoft.com/en-us/library/ms995356

and

https://www.microsoft.com

Digital Forensics

Introduction

Digital forensics is a long-established discipline that is gaining widespread use and popularity within the IT community. It encompasses many activities, including gathering evidence of a break-in. Many techniques from the forensic world can be applied to a modern technological environment to help determine integrity and authenticity. A few are covered in this paper.

The challenges inherent in the computer forensic world have increased with every release of a new operating system. In the beginning, DOS-based systems were relatively easy to understand, and software tools were available to assist the forensic investigator in processing questioned disk media. As operating systems became more complex, the level of training required to understand them increased exponentially. Although the fundamental processes have remained fairly standard, new technologies applied to cutting-edge forensic tools have greatly aided the modern computer crime investigator.

As an example, in the early 1990s it took approximately one week of processing time to examine a 40 MB hard drive. The backup procedure was slow, and documenting the system was laborious. The examination involved searching all clusters, sectors, slack, and other areas of interest on the disk for any and all available evidence. The process has since evolved to the point where it is not unusual to examine drives of 4 GB and larger. Considering the amount of data that can potentially reside on a drive of this size, more efficient forensic tools are required. Today, with Windows-based forensic utilities and imaging tools, backup time has decreased, and the hard drive architecture can be examined in a more timely and seamless manner.

Generally, the evidence an investigator seeks resides in a word processing document, spreadsheet, or other file. Evidence may also reside in erased files, file slack, or even the Windows swap file, all of which are volatile and easily altered if not properly accessed. Merely turning on the questioned computer and starting the Windows GUI triggers processes that can alter, or even destroy, data fragments that can make the difference between the success and failure of an investigation. There is also the possibility of activating a Trojan program deliberately left on the computer by a user, which could modify or destroy the file structure. To ensure this does not happen, a mirror image of the questioned drive is created. A mirror image is a byte-by-byte, sector-by-sector duplication of a hard drive, which should be authenticated by a Cyclical Redundancy Checksum (CRC) during the initial image and restore process. A CRC is a mathematical computation that verifies the integrity of each block of data on a hard drive.
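
As a sketch of the kind of block-level CRC authentication described above, the following reads an image file in sector-sized blocks and records a CRC-32 per block; running it over both the original image and a restored copy and comparing the lists verifies the duplication. The file names and block size are illustrative.

    # Compute per-block CRC-32 values over a drive image so that a
    # restored copy can be verified against the original, block by block.
    import zlib

    BLOCK_SIZE = 512  # one sector

    def block_crcs(image_path):
        crcs = []
        with open(image_path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                crcs.append(zlib.crc32(block) & 0xFFFFFFFF)
        return crcs

    original = block_crcs("suspect_drive.img")   # illustrative file names
    restored = block_crcs("restored_copy.img")
    print("Images match" if original == restored else "Integrity check FAILED")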

Common Tools and Terms: Their Meanings and Application to the Enterprise

File Signatures. Most files have unique signatures: the first few bytes denote the type of file, regardless of the operating system's application association. For example, the first six bytes of a GIF file are GIF87a or GIF89a. In most cases this enables forensic software to establish the true type of a file, regardless of the extension it carries.
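
A minimal sketch of signature-based type identification follows; the signature table is a small illustrative sample rather than a complete forensic database.

    # Identify a file by its leading bytes rather than its extension.
    SIGNATURES = {
        b"GIF87a": "GIF image (87a)",
        b"GIF89a": "GIF image (89a)",
        b"\x89PNG\r\n\x1a\n": "PNG image",
        b"PK\x03\x04": "ZIP archive",
        b"MZ": "DOS/Windows executable",
    }

    def identify(path):
        with open(path, "rb") as f:
            header = f.read(8)
        for magic, description in SIGNATURES.items():
            if header.startswith(magic):
                return description
        return "unknown type"

    # A renamed file is still identified by its true signature:
    print(identify("vacation.jpg"))  # may report "GIF image (87a)"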

Hashing. Hashing is often loosely referred to as taking an MD5 checksum, because the most commonly used hash is the MD5 hash. An MD5 hash is a 128-bit number computed from a file's contents; if the file is altered in any way its hash changes, so a matching hash signifies that file integrity has not been compromised. For example, Windows 2000 Advanced Server's explorer.exe has an MD5 hash of 72 51 75 97 85 c6 0e d0 e3 d3 f8 37 9c 89 a0 79. The odds of obtaining the same MD5 sum from two different files are about 1 in 2^128. The most commonly trojaned or altered files and their valid checksums (for Windows 2000 Advanced Server) can be found in the following table:

The most commonly altered files and their valid checksums

File name         Use                    MD5 checksum
explorer.exe      Windows Explorer       72 51 75 97 85 c6 0e d0 e3 d3 f8 37 9c 89 a0 79
taskmgr.exe       Task Manager           b2 e4 32 b3 4d cc bc 68 88 fa 3a aa 71 94 c5 c2
logon.scr         Logon screen           79 80 b0 36 2c ce ec f2 55 72 e8 64 f9 7c 2b b1
cmd.exe           Command prompt         53 fc da 64 f7 12 2b cb 4b 60 12 87 03 9a 80 75
rundll32.exe *    Run a DLL as an app    1e d5 27 48 25 cd 1e eb be 10 2b 9f f7 c9 ec 31

* Special attention should be given to DLLs listed in HKLM\Software\Microsoft\Windows\CurrentVersion\Run that are run by this program.

Windows 2000 includes Windows File Protection, which guards against alteration of most of these files, making it more difficult for an intruder to establish backdoors in the event of a compromise. It is still a good idea to monitor these files with an integrity checker such as Tripwire as a precaution. By following the basic security concept of running in the least-privileged security context allowed for a given task, the extent of exposure can also be minimized.
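
A minimal sketch of checking one of these files against its published hash follows, using Python's standard hashlib module. The install path is an assumption, and the expected value is taken from the table above.

    # Verify a system file against a known-good MD5 value.
    import hashlib

    def md5_of(path):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # explorer.exe hash from the table above, with the spaces removed.
    EXPECTED = "7251759785c60ed0e3d3f8379c89a079"

    actual = md5_of(r"C:\WINNT\explorer.exe")  # illustrative install path
    print("OK" if actual == EXPECTED else f"MISMATCH: {actual}")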

File or Disk Slack. File slack is the space available from the end of a file to the end of a cluster. For example, if the file size is 200 bytes and the cluster size is 512 bytes, then the file slack would be 312 bytes. File slack also refers to unallocated space in a cluster. It is possible to hide files within the file slack.
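
The arithmetic is straightforward, as this worked sketch shows:

    # File slack: bytes from the end of a file to the end of its last cluster.
    def file_slack(file_size, cluster_size=512):
        remainder = file_size % cluster_size
        return 0 if remainder == 0 else cluster_size - remainder

    print(file_slack(200, 512))  # 312, matching the example above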


RAM Slack. The space from the end of a file to the end of its containing sector is called RAM slack. Before a sector is written to disk, it is stored in a RAM buffer. If the sector is only partially filled when it is committed to disk, whatever remnants happen to occupy the end of the buffer are usually written to disk along with it. From this information it is sometimes possible to recover data that was never deliberately saved.

An assortment of software tools can be used to create a mirror image of a questioned drive. I utilize two different methodologies in the image capture process. My preference is a high-speed backup to a DLT tape drive; alternatively, I capture the image in a proprietary format to another drive for later examination. I usually image a drive to DLT when in the field or at a location that requires several backups to be made very quickly. These images are saved and can later be restored in my lab as many times as necessary to conduct the examination. This also affords the opportunity to restore the image to a control drive, which is then placed in the original equipment. If imaged properly, the examiner has the ability to run the programs as originally installed on the questioned computer.

The second method involves making an image using a proprietary scheme. The image can easily be viewed at a later time, or if necessary, previewed at the scene. These images are captured as an unalterable file, allowing backup to a CD, or even to a share on a networked control computer.

After the questioned drive is restored, a routine virus scan is initiated, and the examination process commences. Today, analysis is expedited by the user-friendly interfaces of a variety of forensic software and utilities used to inspect text files, graphics, and hidden data areas such as file slack, unallocated file space, and erased files. Depending on the issue and the relevant facts surrounding the case, a word list is created and run against the file structure.

Preservation of Evidence

Another major consideration in the forensic process is the maintenance and documentation of the evidence flow as it is received and examined. Failure to develop a firm policy and procedure can potentially damage the chain of custody and cause an otherwise good examination to be inadmissible in a civil or criminal court proceeding. Also important is the actual storage of the media (both location and type), because damaged evidence is inadmissible evidence.

Other Informative Sources

Denial of Service

Business Continuity Planning

Firewalls

Digital Forensics

Books on Security

Associations and Consortiums