Microsoft Windows NT 4.0 and Windows 98 Threat Mitigation Guide

Archived content. No warranty is made as to technical accuracy. Content may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

Chapter 3: Network Security and Hardening

Published: September 13, 2004 | Updated: March 30, 2006




This chapter describes network security vulnerabilities and the process of hardening hosts on the network against these vulnerabilities. It addresses network segmentation, Transmission Control Protocol/Internet Protocol (TCP/IP) stack hardening, and the use of personal firewalls for host protection.


Older systems are often a target of unwanted attention from attackers because their existence implies some elevated level of trust or interaction with internal applications—this is most likely why they have been retained. When this theoretical value is combined with a perceived vulnerability, older systems become extremely tempting and can be seen as a natural choice for further scrutiny.

When securing older systems, you must consider the place those systems inhabit within your entire environment. By paying attention to the design and configuration of the entire network, you can create logical points in it to restrict the amount of hostile traffic as much as possible before it reaches your older systems. These measures are in addition to the system-specific hardening measures that the subsequent chapters will explore.

Traditionally, the term perimeter network refers to an isolated network segment at the point where a corporate network meets the Internet. Services and servers that must interact with the external, unprotected Internet are placed in the perimeter network, also known as a demilitarized zone (DMZ) or screened subnet. This arrangement ensures that even if attackers exploit vulnerabilities in exposed services, they have taken only one step toward accessing the trusted interior network. One way to get stronger protection for your entire network is to treat your older systems in a similar manner to the way that you treat your perimeter network: put the older systems on their own network segments and isolate them from other hosts on the network. This approach has two benefits: It lowers the risk that a compromised older system will affect the rest of the network, and it enables more aggressive filtering and blocking of network traffic to and from the older computers.

Note   Microsoft recommends that you never expose Microsoft® Windows NT® version 4.0 or Microsoft Windows® 98 systems directly to the Internet, even by placing them in a perimeter network. These systems should be restricted to use on your internal network.

Network Security Considerations

You should protect the older systems in your environment as carefully as you protect your perimeter environment. Hardening and securing your network requires you to balance your business needs, budget restrictions, and the following security considerations, which subsequent sections cover in detail:

  • Defense in depth

  • Perimeter control

  • Bidirectional threats

  • Dissimilar services separation

  • Failure planning and incident response

  • Backups

  • Time synchronization

  • Auditing and monitoring

  • Informed awareness

Defense in Depth

To protect computer systems from today's threats, IT managers should consider a defense-in-depth strategy. A defense-in-depth strategy focuses on a combination of removing factors that increase risk and adding controls to decrease risk. No matter how good your software, hardware, processes, and personnel are, a highly determined attacker may be able to find a way through a single protective layer. The defense-in-depth security model protects key assets by using multiple layers of security throughout the environment to defend against intrusions and security threats. This multilayered approach to system security raises the effort required by an attacker to penetrate an information system, thus reducing the overall risk exposure and probability of compromise.

Rather than depending on only a strong perimeter defense or hardened servers, a defense-in-depth approach to security relies on the aggregation of multiple different defenses against a possible threat. Defense-in-depth does not reduce the need for any other security measures but instead builds on the combined strength of all of the components. Building security in overlapping layers has two key benefits:

  • It makes it more difficult for attacks to be successful. The more layers you have, the harder an attacker has to work to effect a successful penetration, and the greater the chances that you will be able to detect the attack in progress.

  • It helps mitigate the effect of new vulnerabilities in devices. Each layer protects against a different type of attack or provides duplicate coverage that does not suffer from the same weakness as another layer. As a result, many new attacks can be prevented by having a dependent transaction blocked by a still-intact defense measure, giving you time to address the core deficiencies.

If your business processes do not already permit coordinated changes across multiple layers, they need to be adjusted to do so.

Network Segmentation

Perimeter networks are established to create a boundary that allows separation of traffic between internal and external networks. With this boundary in place, you can categorize, quarantine, and control your network traffic. The ideal theoretical perimeter network passes no traffic to the core system except the absolute minimum required to allow desired interactions. Every additional transaction that traverses the perimeter represents another potential hole in the defense, another possible vector through which attackers may reach for control. Each new service that is enabled increases the threat surface, providing another set of code that can produce vulnerabilities and openings.

Traditional security policy calls for defining a perimeter between hosts on your network that directly communicate with the Internet and those that do not. However, you can gain additional security by treating older systems as though they were part of your perimeter, tightly controlling intercommunication between your “normal” network and those segments that contain older systems. Relegating older systems to their own network segments offers two important benefits:

  • It allows you to treat the older systems as you would computers in the perimeter network. Because earlier versions of Windows do not include all of the security features and capabilities of newer versions, they are at greater risk of compromise than newer systems, and they need to be protected accordingly.

  • It provides better control over the older systems. By putting these systems into their own perimeter, you can place them behind network perimeter controls (like firewalls) that allow careful monitoring and control of traffic passing between different networks.

Bidirectional Threats

Many attacks succeed because they provoke the target system into initiating contact outside the perimeter. This contact is engineered to be established with a hostile system, allowing it to reach back into the target. Another common scenario involves worms and viruses; after the system has been successfully compromised, the malware begins spreading from one system to another, exploiting trust relationships and probing external systems in an attempt to spread further. Again, these systems might also reach out to hostile systems to provide a back door for further activity. The perimeter must be designed to limit not only the traffic coming into the protected systems, but also the traffic going out of them. This design makes it more difficult for your servers to be used against your environment in the event of a compromise, and it makes the attacker's job harder, because your own systems cannot be enlisted to help violate their own layers of defense.

Dissimilar Services Separation

Concerns about performance often make it tempting to install multiple services on one computer, in order to ensure that expensive hardware is put to full use. However, doing so indiscriminately can make it difficult to properly secure your system. Carefully analyze the services and traffic that your systems host and generate, and ensure that your perimeter measures are adequate to limit traffic to the needed combinations of services and remote systems. The reverse is also true: Grouping together similar systems and services and careful partitioning of the network can make it much easier to provide protection.

Failure Planning and Incident Response

The process of good security planning and implementation involves asking yourself, “What happens if this measure fails?” It is important to understand the consequences of mistakes, accidents, and other unforeseen events. Identifying them will allow you to design your defenses in such a manner as to mitigate those consequences and to ensure that one failure does not initiate a chain of events resulting in total exploitation of your older systems. For example, every organization should have a predefined plan that describes what to do during a virus or worm outbreak, as well as a plan that describes what to do when a compromise is suspected. In most organizations, incident response teams need to include IT staff, the legal department, and business management; these stakeholders all need to participate in carrying out a coherent response to security breaches. The Microsoft Security Guidance Kit includes information on how to set up and execute your own incident response plans.


Backups

If an attacker is successful in entering your systems, his or her victory can be temporary if you are able to prevent him or her from successfully compromising your key resources and data. A successful backup and restore procedure helps ensure that even if the worst happens, you still have the data you need to rebuild and recover. Ensuring that your data is backed up is just the first step, however. You need to be able to quickly rebuild any compromised or affected system, and if you need to perform forensic analysis on the original hardware (often for purposes of documenting insurance claims or identifying the attack vector), you might need to have spare hardware and software available, as well as tested setup procedures.

Time Synchronization

The various clues for spotting an attack can be scattered across multiple systems, especially in a perimeter network. Without some way of correlating this data, you might never spot them and put them together. All your systems should have the same clock time to assist in this process. The net time command allows workstations and servers to synchronize their time with their domain controllers. Third-party Network Time Protocol (NTP) implementations allow your servers to share time synchronization with other operating systems and network hardware, providing a unified time base across your network.
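As an illustration, the following commands synchronize a Windows NT or Windows 98 machine's clock from the command prompt; the domain and server names shown are placeholders for your own environment:

```
REM Synchronize with the time source for the domain
REM (TREYDOMAIN is a placeholder domain name).
net time /domain:TREYDOMAIN /set /yes

REM Alternatively, synchronize with a specific time server
REM (\\TIMESRV1 is a placeholder server name).
net time \\TIMESRV1 /set /yes
```

Commands like these can be placed in logon scripts so that workstation clocks are corrected at every logon.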

Auditing and Monitoring

No matter how good your system defenses are, you still must audit and monitor them regularly. It is crucial that you know what your normal traffic patterns look like, as well as what attacks and responses look like. If you develop this sense, you will notice when something negative happens, because your network traffic rhythms will change. A key area to audit and monitor is authentication. A sudden string of failed authentication attempts can often be your only warning that your system is under a brute-force dictionary attack, which uses known words or alphanumeric character strings to break simple passwords. Likewise, out-of-pattern authentication successes are a possible indication that your systems have been compromised on at least some level and that an attacker is attempting to leverage the initial exploit into full system access. Regular collection and archiving of event logs, combined with automated and manual analysis, often makes the difference between a failed and a successful penetration attempt. Automated tools like Microsoft Operations Manager (MOM) make it easier to monitor and analyze logged information.

Informed Awareness

You cannot know everything, but you can stay alert and aware of the sorts of threats that other administrators are seeing. There are several excellent security resources that are dedicated to providing up-to-date information on current security threats and issues. These resources are listed in the “More Information” section at the end of this chapter.

Network Security Design

There are several specific measures that you can use to harden your network against internal and external attacks. These measures include the following:

  • Segregate your older systems into their own perimeter-like network segments and protect them with access control rules, firewalls, and other techniques.

  • Deploy firewalls, whether at the network level (through network hardware devices, Microsoft Internet Security and Acceleration (ISA) Server, or other products) or on individual workstations.

  • Harden the TCP/IP stack by applying more restrictive settings on how the stack processes anomalous packets.

  • Use the port and packet filtering features built in to Windows NT 4.0 to provide additional security.  


The following prerequisites are necessary for this solution:

  • Possession of unused network subnets for network segmentation, of sufficient size to contain all of the necessary hosts and network overhead required.

  • An understanding of TCP/IP, the ports used within your network, packet filtering, and routing.

  • A thorough knowledge of the various TCP/IP options and characteristics of the other devices with which your older systems interact.  


Several security considerations apply when designing the network architecture.

Network Segmentation

Network segmentation controls the flow of traffic between hosts on different segments of a network. A segmented network, when properly designed, improves performance and security by ensuring that only appropriate traffic is forwarded between segments within the network. Moving from hubs to switches can minimize the capability of a hacker to sniff the network for password and other sensitive traffic, but switches do not eliminate the possibility altogether. A compromised system connected to a switch can still be used to gather information from other systems. For that reason, you should consider switches to be an answer to network collisions and performance, not network security.

Port and packet filtering, used with personal firewalls, can also help protect older systems from intrusion and compromise. However, in some situations it is not practical to install and manage firewall software on users' computers because of the administrative overhead. Unless you install firewall software that you can manage and configure remotely through a central server and database, an administrator will need to touch each computer multiple times after rollout to modify the configuration to address individual user needs. You also increase the likelihood that you will be required to complete additional administrative tasks for each new application you deploy, in response to the additional vulnerabilities and points of failure that the application may introduce.

Some organizations do not have the staff necessary to manage potentially hundreds of personal firewalls. In these situations, organizations can turn to network segmentation as an alternate or additional security mechanism to further protect the network. As discussed previously, network segmentation means quarantining certain servers in a perimeter network. It can also mean dividing the network into discrete segments to provide additional levels of protection for systems that reside in each segment. Segmentation can also provide considerable flexibility for traffic shaping, port monitoring and filtering, and other network management tasks, because each segment can potentially have its own discrete configuration that fits the needs of the users in the group while also suiting the security needs of the network. In effect, network segmentation brings the firewall to the workgroup level where it can be used in conjunction with the perimeter firewall(s) to further secure the network overall.

Network segmentation addresses two potential threats: those that come from outside the network, and those that come from within. The classic case of a clever employee in the Engineering department who finds a way into the Human Resources department's file server is a good example of a situation in which network segmentation would provide benefit on the local area network (LAN). The firewall sitting at the head of the Human Resources department's segment screens traffic to prevent access from computers in unauthorized departments. Likewise, the Engineering department's segment should be protected from other segments.

Network segmentation lets you structure the network into discrete security zones with the capability for unique, rule-based traffic management for each segment. You can segment the network in several ways. For example, you might choose to deploy a hardware-based firewall at each segment and physically segment the network. Or, you can deploy a single, centralized firewall with virtual LAN/segmentation capability that serves to protect individual groups. The solution that you choose ultimately depends on the network topology and the security needs of each group. At the very least you should isolate those segments that pose the most risk, such as wireless networks, from the rest of the network and impose aggressive rules to prevent unauthorized traffic to and from those segments.

To determine the best network segmentation solution for your network, start by reviewing the network structure. Then you can identify the segments that pose the greatest risk and start to build a solution. It is likely that your existing firewall vendor can offer technical information and products to help you deploy a solution.

Trey Research chose to segment the older systems in its headquarters office by putting its Windows NT 4.0 servers and Windows 98 clients on a separate network segment, and then placing a firewall between that segment and the rest of the corporate network. By enabling network address translation (NAT) on that firewall, Trey’s engineers can easily block or filter inbound and outbound traffic to that segment, giving them additional defensive capability.

Personal Firewalls

Although the protections and measures built into the TCP/IP stack of Windows NT are a start, they have significant limitations that make them unsuitable for many deployments. They also do nothing to protect Windows 98 clients. Software firewalls, also called personal firewalls, can often provide additional protection. These specialized applications sit on top of the network stack to intercept network activity, categorize it against their configured database of permitted traffic, and allow or deny the attempt.

The big advantage that personal firewalls provide is that they can be specifically tuned to the traffic patterns of each individual computer. One possible disadvantage, however, is that because they are an application, they can be accessed and interfered with more easily, whether by accident or malicious intent.

Still, personal firewalls add an extra layer of security to older servers and clients and provide important capabilities that older systems lack, such as the ability to restrict traffic on particular ports to specified hosts and reduce the threat surface exposed by required services. Depending on where the firewall hooks into the networking stack, it can block hostile traffic before it reaches vulnerabilities in the operating system or listening applications.

Personal firewalls also help limit the damage that results from Trojans, viruses, and worms. Such malware often initiates outbound traffic as well as listening on ports for illicit connections. This traffic has multiple purposes, ranging from relaying spam (to both internal and external messaging systems) to scanning other hosts and networks for vulnerabilities and openings, and it wastes disk space, processor cycles, memory, and network bandwidth. Such applications and their connection attempts often cause denial-of-service (DoS) periods as a secondary effect, in addition to their infection and cleanup problems.

In addition to expanded ingress and egress filtering capabilities, many personal firewalls can also allow or deny network access based on the executable that requests it. This functionality can be used to block specific applications — or only allow pre-approved applications — from ever accessing the network, regardless of what ports and protocols they use. This prevents users from circumventing security configurations with protocol-agile applications such as peer-to-peer file sharing, instant messaging, or other applications that tunnel their connections through Hypertext Transfer Protocol (HTTP).

Personal firewalls can require a bit of time to set up and configure properly because they scrutinize literally every transaction, at least until a transaction is properly categorized and classified as allowed or denied. They require detailed knowledge of every bit of network traffic generated by authorized applications and services, because a single blocked interaction could subtly cripple the execution of necessary programs. They also require maintenance, patching, and updating. Look for personal firewalls that have features intended for corporate and enterprise use, such as the ability to centrally manage and maintain configuration databases.

The ISA Firewall Client, used in conjunction with ISA Server, provides more sophisticated filtering and policy enforcement capabilities; it allows traffic control based on the user’s identity, as well as the origin or destination of the traffic. All traffic can be monitored and controlled through strategically located servers, and the enterprise-wide policies can be updated easily and quickly.

The Routing and Remote Access Service (RRAS) download for Windows NT is another option. Although RRAS mainly provides support for dynamic routing protocols such as the Routing Information Protocol (RIP), along with dial-up and virtual private networking (VPN) capabilities, it also provides the ability to define access control on incoming network traffic above and beyond the basic port filtering built in to Windows NT. RRAS is a free download, available from the Routing and Remote Access Service Download page.

Using RRAS or any add-on solution has one significant disadvantage, however. Add-ons are software subsystems, running as Windows services, and as such are started after the network interfaces and protocols are initialized. Thus, there is a window of opportunity during which hostile traffic can slip through during a reboot or service outage. The use of personal firewalls, the ISA Server Firewall Client, or RRAS without other forms and layers of protection will not provide absolute security. As always, multiple defenses, used to offset and counteract weaknesses in each layer, provide the strongest protection.

Trey Research is already using the Internet Connection Firewall (ICF), part of Windows XP, on some of its systems. Trey chose to purchase a license for, and deploy, a third-party personal firewall product for its computers running Windows 98 and Windows NT Workstation 4.0. This provides both ingress and egress filtering for internal traffic on the company's segregated network, and it provides centralized data collection that helps Trey maintain visibility into network traffic type and volume.

Windows NT Port and Packet Filters

One of the best methods of protecting networked computers is to limit what types of network traffic they receive and process, which usually requires some sort of packet filter. Most administrators think of routers and network chokepoints when they design packet-filtering strategies. However, Windows NT comes with a basic packet filtering capability, known as TCP/IP security. Although this facility does not provide sufficient protection by itself, it does make an excellent secondary layer of defense when used in conjunction with stateful packet-filtering devices.

The main advantage of the Windows NT built-in TCP/IP security features is that they are implemented within the TCP/IP network stack as an integral part of the protocol drivers. The benefit of this depth of integration is that all of the settings are always active as long as the protected interfaces are active; there is never any window of time, such as start-up, during which network traffic is not being filtered. TCP/IP security is transparent to applications, although it can interfere with some personal firewall software.

Despite its name, TCP/IP security allows port-by-port filtering of TCP and User Datagram Protocol (UDP) traffic, as well as other IP protocols. Active filters block inbound traffic but permit outbound traffic and responses to TCP connections initiated by the local host. There are some limitations, however:

  • Port filters either allow or block traffic from all hosts. You cannot establish any finer level of granular control, as is possible with the IP Security (IPsec) Extensions capability built into Windows 2000, Windows XP, and Microsoft Windows Server™ 2003.

  • The filters are not truly stateful and cannot be linked dynamically to allow traffic for secondary connections. The IPsec implementation in later versions of Windows provides for allowing secondary connections, as do most hardware firewalls.  
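As an illustrative sketch, the global switch for this filtering capability is stored in the registry (verify the value name against KB article 120642 before relying on it); the per-adapter lists of allowed ports are REG_MULTI_SZ values and are most easily maintained through the Network Control Panel rather than a registry script:

```
REGEDIT4

; Enable the built-in TCP/IP security (port filtering) globally.
; The per-adapter TcpAllowedPorts, UdpAllowedPorts, and
; RawIpAllowedProtocols lists are REG_MULTI_SZ values stored under
; each adapter's own Parameters key; configure them through
; Control Panel > Network > Protocols > TCP/IP > Advanced.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"EnableSecurityFilters"=dword:00000001
```

Test any such script on a nonproduction system before deployment, because a missing allowed port silently blocks the corresponding service.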

In addition to the simple port filtering capability just described, the Windows NT TCP/IP stack provides many tunable parameters that are especially interesting for closing or mitigating threatening network traffic. Over time, a variety of attacks have been developed that exploit flaws in the Windows NT 4.0 TCP/IP networking code; even though all of these flaws have been addressed by service packs and security updates, it may still be valuable to apply these changes to give your network additional protection. These adjustments usually require direct editing of the registry by using the regedt32 or regedit tools. Microsoft Knowledge Base (KB) article 120642, “TCP/IP and NBT Configuration Parameters for Windows 2000 or Windows NT,” provides a long list of the tunable parameters; the ones that are most pertinent to network and system security are discussed in the following sections.

Trey Research has defined port and packet filtering on its older hosts to prevent traffic on ports that are neither listed in KB article 150543, "Windows NT, Terminal Server, and Microsoft Exchange Services Use TCP/IP Ports," nor used by other applications on its network from leaving the internal network and traveling to the Internet. This prevents sensitive network traffic from being broadcast on an uncontrolled network.

SYN Flooding Protection

SYN (synchronization) flooding is a common vector of attacks against TCP services. When a TCP client initiates a new connection, it sends an empty TCP packet to the listening server with the SYN flag set, indicating that it is requesting a new connection. The server sends back a response packet with both the SYN and ACK (Acknowledgment) flags. The client then responds with an ACK packet, completing the three-way handshake, and the connection is then open.

Until the final acknowledgment is received, the TCP/IP driver assigns this connection the SYN_RCVD (SYN Received) state. If for some reason Windows does not receive a response to the SYN+ACK packet, it will wait, by default, for one second and then retransmit the SYN+ACK packet. A second retransmission will occur after another timeout of three seconds, with a final retransmission occurring after another six seconds. Each connection request requires the server to allocate a certain amount of memory and other kernel structures; a flood of incoming requests can rapidly exhaust resources and cause a DoS. The applicable registry keys are the following:

Tcpip\Parameters\SynAttackProtect (REG_DWORD):

  • 0 = Disabled (default)

  • 1 = Delay creation of the route cache entry until connection is established

  • 2 = Delay notifying the Winsock driver until the three-way handshake is complete (recommended)  

Tcpip\Parameters\TcpMaxHalfOpen (REG_DWORD):

  • This key defines the maximum number of connections that the IP stack will allow to be in the SYN_RCVD state before SYN attack protection is triggered. The default value is 100.  

Tcpip\Parameters\TcpMaxHalfOpenRetried (REG_DWORD):

  • This key defines the maximum number of connections that are both in the SYN_RCVD state and have been retransmitted more than once before SYN attack protection is triggered. The default value is 80.  

Tcpip\Parameters\TcpMaxPortsExhausted (REG_DWORD):

  • This key defines the number of connect requests refused for lack of backlog entries before SYN attack protection is triggered. The default value is 5.  

Tcpip\Parameters\TcpMaxConnectResponseRetransmissions (REG_DWORD):

  • This key defines the number of retransmission attempts during the SYN_RCVD state. The default value is 3, which causes all three retransmissions. Setting this value to 1 will cause only one retransmission attempt.
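Collected into a single registry script, the recommended values from this section might look like the following sketch (hexadecimal values, with decimal equivalents noted in the comments; test on a nonproduction system before deployment):

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; 2 = delay notifying Winsock until the handshake completes (recommended).
"SynAttackProtect"=dword:00000002
; 0x64 = 100 half-open connections before protection triggers (the default).
"TcpMaxHalfOpen"=dword:00000064
; 0x50 = 80 retried half-open connections (the default).
"TcpMaxHalfOpenRetried"=dword:00000050
; 5 refused connect requests before protection triggers (the default).
"TcpMaxPortsExhausted"=dword:00000005
; Retransmit the SYN+ACK response only once.
"TcpMaxConnectResponseRetransmissions"=dword:00000001
```

The threshold values shown here simply restate the defaults; tighten them only after measuring normal SYN traffic on your own network.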

KB article 146241, “Internet Server Unavailable Because of Malicious SYN Attacks,” discusses these settings in more detail.

On older hosts, Trey Research has tightened the settings of all of the registry keys listed previously, reducing the chance that a SYN attack will negatively impact those systems. Trey Research did not use the most restrictive settings, because there is a legitimate need for regular SYN traffic to traverse the network, and it does not want to interrupt the traversal of normal network traffic. Trey's system administrators understand that the changes they made to the default settings may need to be revisited and are proactively monitoring the amount of SYN traffic on their network.

Backlog Size Control

Applications that make use of TCP/IP use the Winsock application programming interface (API), which is provided by afd.sys, the Winsock kernel mode driver. The Winsock API provides the mechanisms for applications to open client connections and establish listeners on ports for server connections. The listen() function is used to tell Winsock to listen for connection attempts to a specific port and forward them to the application.

One parameter that this function must specify is the backlog size, which is a queue that holds pending incoming connections until the stack can handle them. The backlog provides a maximum queue length and by default is set to 200 on Windows NT Server and to 5 on Windows NT Workstation. Windows NT 4.0 Service Pack 2 (SP2) introduces the dynamic backlog feature, which permits the TCP/IP stack to adjust the size of the backlog queue as needed to respond to current network conditions.

The dynamic backlog feature is turned off by default and must be enabled by using the following registry entries. In addition, the calling application must request a backlog queue larger than the MinimumDynamicBacklog parameter in order to benefit from this feature. The applicable registry keys are the following:

AFD\Parameters\EnableDynamicBacklog (REG_DWORD):

  • 0 = Disabled (default)

  • 1 = Enabled (recommended)  

AFD\Parameters\MinimumDynamicBacklog (REG_DWORD):

  • This key defines the minimum number of free entries in the backlog queue; if the number of available entries drops below this minimum, additional entries are created automatically. 20 is the recommended value.  

AFD\Parameters\MaximumDynamicBacklog (REG_DWORD):

  • This key defines the maximum number of entries that can be created in the backlog queue. Setting this number higher than 5,000 per 32 megabytes (MB) of system random access memory (RAM) can lead to memory exhaustion under attack. Remember that a separate backlog queue is created for each network service. Because Trey Research’s target servers have 512 MB of system RAM, the formula used to determine the upper limit of this registry value is: (512 / 32) × 5,000 = 80,000.

  • For workstations (all of which have 256 MB of RAM), the calculated value is: (256 / 32) × 5,000 = 40,000.

AFD\Parameters\DynamicBacklogGrowthDelta (REG_DWORD):

  • This key defines the number of new entries to add to the backlog queue at one time when more are required. The recommended value is 10.
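Taken together, these four AFD values can be expressed as a single REGEDIT4 file of the kind this guide distributes in its Tools and Templates folder. The following sketch is illustrative rather than one of the shipped templates; it uses the recommended minimum and growth values, and for the maximum it uses the 80,000 figure derived from the 512 MB server formula above (adjust for your systems' RAM).

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters]
; Turn on the dynamic backlog feature (disabled by default)
"EnableDynamicBacklog"=dword:00000001
; Minimum free entries in the backlog queue (0x14 = 20)
"MinimumDynamicBacklog"=dword:00000014
; Maximum entries: (512 MB / 32) * 5,000 = 80,000 (0x13880)
"MaximumDynamicBacklog"=dword:00013880
; Number of entries added at a time when more are required (0xa = 10)
"DynamicBacklogGrowthDelta"=dword:0000000a
```

Remember that REG_DWORD data in a .reg file is written in hexadecimal.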

KB article 146241, “Internet Server Unavailable Because of Malicious SYN Attacks,” discusses these settings in more detail.

Trey Research elected to set the recommended values for all registry keys listed previously in order to control the number of incoming connections so that the company's systems do not become overwhelmed by incoming requests.

TCP Keep-Alive Timers

TCP keep-alive timers are an advanced TCP feature that keeps idle connections alive. This function becomes especially important when those connections pass through firewalls and Network Address Translation (NAT) devices, which regularly purge aged entries from their connection and masquerading tables.

Windows NT provides the capability to enable and define TCP keep-alive timeouts. Lowering this value helps prevent dead connections from keeping resources in use for too long. To set this value, use the following registry keys:

Tcpip\Parameters\KeepAliveTime (REG_DWORD):

  • This key defines the number of milliseconds between keep-alive checks; by default this value is 7,200,000 (two hours).  

Tcpip\Parameters\KeepAliveInterval (REG_DWORD):

  • This key defines the number of milliseconds between keep-alive packet retransmissions if a response is not received; by default this value is 1000 (one second).  

Trey Research did not alter its TCP keep-alive timer settings, because they have not, to date, experienced any problems with the amount of time that connections are kept in use.
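Trey Research left these values alone, but as an illustration, the following REGEDIT4 sketch lowers KeepAliveTime to one hour (3,600,000 milliseconds) while keeping KeepAliveInterval at its one-second default:

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; Probe idle connections hourly instead of the two-hour default
; (0x36ee80 = 3,600,000 milliseconds)
"KeepAliveTime"=dword:0036ee80
; Retransmit unanswered keep-alive packets every second (0x3e8 = 1,000)
"KeepAliveInterval"=dword:000003e8
```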

Path Maximum Transmission Unit Discovery

Path Maximum Transmission Unit (MTU) discovery is a feature that allows Windows to automatically discover the largest supported packet size on all network segments between hosts. It works by sending out large packets that have the “do not fragment” bit set. When an intervening router cannot forward this packet because it is larger than the MTU for the segment, it returns an Internet Control Message Protocol (ICMP) message. Windows then reduces the packet size and tries again until the packet is sent through to the destination.

By setting the proper MTU for remote hosts, Windows avoids generating fragmented packets, which reduce performance and increase the chance of lost data and retransmissions. Fragmented packets can be a security risk as well; fragmented packet handlers are a ripe source of buffer overflows, and the ability to filter out fragments at the network border can prevent many attacks. Path MTU Discovery should be left enabled, but it requires ICMP type 3, code 4 (Destination Unreachable, Fragmentation Needed) messages to be routed through your firewalls. Use the following registry keys to control this feature:

Tcpip\Parameters\EnablePathMTUDiscovery (REG_DWORD):

  • 1 = Enabled (default, recommended)

  • 0 = Disabled  

A value of 0 sets the MTU size to 576 bytes for all traffic outside of configured local subnets. Additionally, with this setting, Windows will not honor requests to change the MTU.

Tcpip\Parameters\EnablePathMTUBHDetect (REG_DWORD):

  • 0 = Disabled (default)

  • 1 = Enabled  

A value of 1 allows Windows to attempt detection of black hole routers during Path MTU Discovery; these routers silently discard packets with the “do not fragment” flag set if they are too large, instead of sending the correct ICMP reply. Enabling this value will cause more retransmission attempts.

Trey Research did not alter the default values for path MTU discovery because disabling this feature might result in some remote systems becoming unreachable. Trey Research does not want to interrupt business transactions in cases where systems within the communication path cannot support reducing the MTU size.

Source Routing

Source routing allows applications to override the routing tables and specify one or more intermediate destinations for outgoing datagrams. Although this capability is marginally useful for troubleshooting, it is extremely unwise to use it on modern production networks. Successful attackers can use this feature to transparently direct all network traffic to a centralized collection point for packet capture. Disable source routing by using the following registry key; additionally, ensure that your border routers are configured to drop all IP datagrams with the source routing option set:

Tcpip\Parameters\DisableIPSourceRouting (REG_DWORD):

  • 0 = Enabled (default)

  • 1 = Disabled when IP forwarding is enabled

  • 2 = Disabled completely (recommended)  

Trey Research disabled source routing on all hosts within its control to help prevent the capture of network data by attackers.

Dead Gateway Detection

Dead Gateway Detection allows Windows to detect when a default gateway appears to have stopped responding and to fail over to additional configured default gateways. In practice, this capability is rarely used, and it provides an opportunity for denial of service (DoS) attacks. Use the following registry key to control this function:

Tcpip\Parameters\EnableDeadGWDetect (REG_DWORD):

  • 0 = Disabled (recommended)

  • 1 = Enabled (default)  

Trey Research disabled dead gateway detection on all hosts to help alleviate the possibility of DoS attacks on its network.

Router Discovery

Router discovery uses ICMP Router Discovery messages to locate and configure default routes and gateways. Again, attackers can use this ability to redirect network traffic for various purposes, including sniffing and man-in-the-middle attacks. For router discovery to operate, routers on the segment must send ICMP Internet Router Discovery Protocol (IRDP) messages, a capability that should be disabled on the routers themselves. In addition, Dynamic Host Configuration Protocol (DHCP) must be configured with the appropriate option for the interface to accept IRDP messages.

The following registry key should be edited to prevent malicious configuration changes; if the capability to use multiple routes is necessary, consider deploying the RRAS add-on and using a securable routing protocol.

Tcpip\Parameters\PerformRouterDiscovery (REG_DWORD):

  • 0 = Disabled (recommended)

  • 1 = Enabled  

Trey Research disabled router discovery on all hosts to prevent unapproved configuration changes within its network.

ICMP Redirects

ICMP redirects are another potential vulnerability because they allow an arbitrary sender to forge packets that alter the victim’s routing table. This feature is enabled by default and should be disabled by using the following key:

Tcpip\Parameters\EnableICMPRedirect (REG_DWORD):

  • 0 = Disabled (recommended)

  • 1 = Enabled (default)  

Trey Research disabled ICMP redirects on all hosts to protect the routing table on each host.
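The four routing-related protections above (source routing, dead gateway detection, router discovery, and ICMP redirects) all live under the same Tcpip\Parameters key, so the recommended values can be applied together in one REGEDIT4 file. The following is a sketch, not a shipped template:

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; Drop all source-routed datagrams
"DisableIPSourceRouting"=dword:00000002
; Do not fail over to alternate default gateways
"EnableDeadGWDetect"=dword:00000000
; Ignore ICMP router discovery (IRDP) messages
"PerformRouterDiscovery"=dword:00000000
; Do not let ICMP redirects alter the routing table
"EnableICMPRedirect"=dword:00000000
```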

RPC Port Closures

By default, many services listen on all network interfaces, including the local loopback interface. For example, the Microsoft remote procedure call (RPC) port mapper listens on TCP/135, UDP/135, TCP/1027, and TCP/1028. Three RPC components (the RPC client, the RPC server, and the RPC end-point mapper) can be configured to close all open ports. However, these changes must be carefully tested because they can break functionality, not only with remote hosts, but also between local services.

Microsoft Exchange and Microsoft SQL Server™ are the most commonly deployed applications that require RPC. Additionally, RPC calls are used during remote management of servers. The common built-in utilities that are dependent on RPC services are:

  • DHCP Manager

  • DNS Administrator

  • WINS Manager

  • Performance Monitor

  • Event Viewer

  • Registry Editor

  • Server Manager

  • User Manager  

To determine whether RPC is being used by an application, install the Network Monitor Tools and Agent from the Windows NT 4.0 CD and use Network Monitor to assess whether the application is using RPCs on the ports specified previously.

Remove the following registry keys to disable the RPC client:

  • HKLM\SOFTWARE\Microsoft\RPC\ClientProtocols\ncacn_ip_tcp

  • HKLM\SOFTWARE\Microsoft\RPC\ClientProtocols\ncacn_ip_udp

Remove the following registry keys to disable the RPC server:

  • HKLM\SOFTWARE\Microsoft\RPC\ServerProtocols\ncacn_ip_tcp

  • HKLM\SOFTWARE\Microsoft\RPC\ServerProtocols\ncacn_ip_udp

The RPC end-point mapper (rpcss.exe) opens multiple ports if the RPC server is enabled, but it can alternatively be configured to reject non-local Distributed Component Object Model (DCOM) connection attempts. Be aware that this will have a significant impact on your ability to remotely manage your systems; the Directory Services client extension described in Chapter 4, "Hardening Microsoft Windows NT 4.0," will be particularly affected, because it relies heavily on the Windows Management Instrumentation (WMI) providers, which are DCOM objects. You can disable non-local DCOM connections by using the following registry key:

OLE\EnableDCOM (REG_SZ):

  • Set the value to “N” to disable remote DCOM connections.
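Because EnableDCOM is a REG_SZ value, the corresponding REGEDIT4 entry uses a quoted string rather than the dword: syntax used for the other settings in this chapter. A minimal sketch:

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\OLE]
; "N" rejects non-local DCOM connection attempts; "Y" restores the default
"EnableDCOM"="N"
```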

Trey Research did not alter the settings for RPC ports on its internal hosts, because it needs these ports active in order to use numerous network services, and closing the RPC ports would have left several applications inaccessible to users.



For these implementation details to work correctly, you must have implemented the basic Trey Research infrastructure introduced in Chapter 2, "Applying the Security Risk Management Discipline to the Trey Research Scenario."


Implementing this solution scenario will involve performing the following activities:

  • Configuring native Windows NT 4.0 port filtering

  • Configuring Windows NT 4.0 IP tuning parameters

Registry files for most of these settings can be found in the Tools and Templates that are included as part of this guidance.  

These are REGEDIT4-formatted files that you can use as a template for settings that you apply manually with Registry Editor on the relevant computer. They can also be applied directly to a specific computer by double-clicking them. The files include all the registry keys, subkeys, and values required to secure the computer accordingly.

Caution   Double-clicking a .reg file automatically makes alterations to your registry after confirmation. Where needed, you can edit the .reg file with Notepad, for example, if you need to change a path or string value.
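For reference, a REGEDIT4 file is a plain-text file that begins with the literal header REGEDIT4, followed by bracketed key paths and the value assignments to be made beneath them. The fragment below is a hypothetical excerpt in that format, using two of the SYN-protection values listed in the procedures that follow; note that DWORD data is written in hexadecimal (0x64 = 100):

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"SynAttackProtect"=dword:00000002
"TcpMaxHalfOpen"=dword:00000064
```

Editing such a file in Notepad and then double-clicking it merges the changed values into the registry after a confirmation prompt.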

Configuring Native Windows NT 4.0 Port Filtering

Prior to enabling TCP/IP port filtering on your Windows NT servers, you must know exactly which traffic is necessary for the proper operation of that server in its given role. Microsoft provides a base list of the well-known ports that Windows NT 4.0 uses in "Appendix B: Port Reference for MS TCP/IP."

Other applications should have their own documentation available. For Windows Server systems, you can consult KB article 832017, "Service Overview and Network Port Requirements for the Windows Server System." For applications not included in these lists, either consult the application vendor’s documentation or use network traffic monitoring tools such as Netmon.exe (a component of Windows NT 4.0), Netstat (a built-in Windows NT utility that shows which ports are currently in use), or TCPView (a free tool available from Sysinternals) to verify the ports that the application uses.

Configuring Windows NT port filtering is a simple process; use the following procedure.

To configure native Windows NT port filtering

  1. To open the Network Control Panel, click Start, point to Settings, click Control Panel, and then double-click Network.

  2. On the Protocols tab, select TCP/IP Protocol, and then click Properties.

  3. On the IP Address tab, click Advanced.

  4. In the Advanced IP Addressing dialog box, select the Enable Security check box.

  5. Click Configure.

  6. In the TCP/IP Security dialog box, pick the relevant adapter (if the server is multi-homed).

  7. By default, no filters are defined, and all TCP, UDP, and IP traffic is permitted (see Figure 3.1). To enable filters, select Permit Only for TCP Ports, UDP Ports, or IP Protocols, and then add the port and protocol numbers on which you want to allow traffic. Be sure to allow for basic infrastructure services.


    Figure 3.1 Configuring native Windows NT port filtering

Configuring Windows NT 4.0 IP Tuning Parameters

Configuring the Windows NT IP tuning parameters is a matter of editing the following registry parameters.

Caution   Changing the parameters used to tune the TCP/IP stack changes the way that the network subsystem communicates with routers, switches, and other computers on the network. These changes may impact the performance or stability of production applications. Before making any changes to the production environment, you should thoroughly test your proposed changes in a lab environment that replicates the behavior and configuration of your production servers and clients.

To configure the Windows NT IP tuning parameters for servers

  1. Run Registry Editor (Regedt32.exe).

  2. Ensure that the following registry entries (listed in TCP_Params.reg in the Tools and Templates folder) are applied to the HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters key:

    • SynAttackProtect (REG_DWORD) = 2

    • TcpMaxHalfOpen (REG_DWORD) = 100

    • TcpMaxHalfOpenRetried (REG_DWORD) = 80

    • TcpMaxPortsExhausted (REG_DWORD) = 5

    • TcpMaxConnectResponseRetransmissions (REG_DWORD) = 1

    • KeepAliveTime (REG_DWORD) = 7200000

    • KeepAliveInterval (REG_DWORD) = 1000

    • EnablePathMTUDiscovery (REG_DWORD) = 1

    • EnablePathMTUBHDetect (REG_DWORD) = 0

    • DisableIPSourceRouting (REG_DWORD) = 2

    • EnableDeadGWDetect (REG_DWORD) = 0

    • PerformRouterDiscovery (REG_DWORD) = 0

    • EnableICMPRedirect (REG_DWORD) = 0  

  3. Ensure that the following registry entries (listed in AFD_Params.reg in the Tools and Templates folder) are applied to the HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters key:

    • EnableDynamicBacklog (REG_DWORD) = 1

    • MinimumDynamicBacklog (REG_DWORD) = 20

    • MaximumDynamicBacklog (REG_DWORD) = 144000

    • DynamicBacklogGrowthDelta (REG_DWORD) = 10

  4. Ensure that the following registry entries (listed in DisableDCOM.reg in the Tools and Templates folder) are applied to the HKLM\SOFTWARE\Microsoft\OLE key:

    • EnableDCOM (REG_SZ) = N

    Caution   Disabling DCOM will disable several basic remote management capabilities and other applications, such as the Directory Services client extensions, WMI, the ability to remotely manage print and file shares, and more. Carefully test this change in your lab before you apply it to production servers.

  5. If you are certain that no RPC-based services, applications, or utilities are in use, delete the following subkeys:

    • HKLM\SOFTWARE\Microsoft\RPC\ClientProtocols\ncacn_ip_tcp

    • HKLM\SOFTWARE\Microsoft\RPC\ClientProtocols\ncacn_ip_udp

    • HKLM\SOFTWARE\Microsoft\RPC\ServerProtocols\ncacn_ip_tcp

    • HKLM\SOFTWARE\Microsoft\RPC\ServerProtocols\ncacn_ip_udp

  6. Exit Registry Editor.
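As an alternative to deleting the keys in step 5 by hand, the REGEDIT4 format can also express deletions: prefixing a bracketed key path with a hyphen removes that key when the file is merged. The sketch below assumes, as step 5 states, that these are subkeys; if your systems instead expose them as values under ClientProtocols and ServerProtocols, use the "name"=- value-deletion syntax instead. Test carefully, because merging this file disables RPC over TCP/IP entirely.

```
REGEDIT4

; A leading hyphen inside the brackets deletes the key when the file is merged
[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RPC\ClientProtocols\ncacn_ip_tcp]
[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RPC\ClientProtocols\ncacn_ip_udp]
[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RPC\ServerProtocols\ncacn_ip_tcp]
[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RPC\ServerProtocols\ncacn_ip_udp]
```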

To configure the Windows NT IP tuning parameters for workstations

  1. Run Registry Editor (Regedt32.exe).

  2. Ensure that the following registry entries (listed in NT4WS_TCP_Params.reg in the Tools and Templates folder) are applied to the HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters key:

    • SynAttackProtect (REG_DWORD) = 2

    • TcpMaxHalfOpen (REG_DWORD) = 100

    • TcpMaxHalfOpenRetried (REG_DWORD) = 80

    • TcpMaxPortsExhausted (REG_DWORD) = 5

    • TcpMaxConnectResponseRetransmissions (REG_DWORD) = 1

    • KeepAliveTime (REG_DWORD) = 3600000

    • KeepAliveInterval (REG_DWORD) = 1000

    • EnablePathMTUDiscovery (REG_DWORD) = 1

    • EnablePathMTUBHDetect (REG_DWORD) = 0

    • DisableIPSourceRouting (REG_DWORD) = 2

    • EnableDeadGWDetect (REG_DWORD) = 0

    • PerformRouterDiscovery (REG_DWORD) = 0

    • EnableICMPRedirect (REG_DWORD) = 0

  3. Ensure that the following registry entries (listed in NT4WS_AFD_Params.reg in the Tools and Templates folder) are applied to the HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters key:

    • EnableDynamicBacklog (REG_DWORD) = 1

    • MinimumDynamicBacklog (REG_DWORD) = 20

    • MaximumDynamicBacklog (REG_DWORD) = 72000

    • DynamicBacklogGrowthDelta (REG_DWORD) = 10

  4. Exit Registry Editor.


Extending the concept of the perimeter network is a useful measure for securing older systems, one that offers additional protection within the overall design and configuration of the entire network. These natural chokepoints, combined with the traffic control provided by personal firewall software and the native port filtering and IP tuning capabilities of Windows NT 4.0, give you multiple layers of protection that enhance the specific hardening instructions that subsequent chapters in this guidance offer.

More Information

  • The NTBUGTRAQ mailing list is an excellent resource devoted to the discussion of current Windows security issues. The list moderator, Russ Cooper, is not affiliated with Microsoft but is extremely knowledgeable. You can subscribe to the list or read the archives on the NTBUGTRAQ website.

  • The CERT Coordination Center (CERT/CC) is a central clearinghouse for security advisories and bulletins. More information is available on the CERT/CC website.

  • The SANS Institute offers a variety of training, certification, and research focused on computer security issues. More information is available on the SANS Institute website.

  • The US Computer Emergency Readiness Team (US-CERT) focuses on computer security advisories and threats for the United States, but it is a good source of information on current threats. More information is available on the US-CERT website.

  • The National Security Agency offers a variety of secure configuration guides covering multiple operating systems and applications on its website.

  • RRAS for Windows NT 4.0 is available on the Routing and Remote Access Service Download page.

