Lights Out Operation Guide for Windows NT Server

Archived content. No warranty is made as to technical accuracy. Content may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
On This Page

Setup, Updates and Removal
Load Testing
Additional Resources
Appendix A


Network operators and large corporate clients are finding it easy to remotely set up and manage servers running the Microsoft® Windows NT® Server network operating system. On first inspection this would seem unlikely, given the graphical user interface (UI) that one typically associates with Windows NT, but administrators and managers of large distributed environments have discovered that the Windows NT operating system also lends itself to a more "traditional" remote management approach in which a graphical UI plays no part. As such, Windows NT can be managed in a lights-out environment just as easily—and in many cases more easily—than traditional legacy systems.

Indeed, the challenges of running systems remotely are many, but the Windows NT operating system has been designed to meet the needs of operators wishing to run their networks in this manner.

  • Systems running Windows NT are ready to run in "headless mode"—without a local console device or keyboard attached. The Windows NT operating system can be installed, configured, and booted remotely by administrators using a VT100 or ANSI-based UI over an in-band or out-of-band communications channel. It readily supports the execution of complex administrative batch jobs—even those that spawn processes that run on other machines.

  • Systems running Windows NT can be monitored and administered remotely; they can be configured to alert an administrator as a function of specified conditions, even configured to restart automatically under designated conditions.

  • Systems running Windows NT have a variety of security mechanisms that can both ensure the integrity of the systems themselves and prevent unauthorized access in a lights-out environment.

This guide, and the tools provided in the download that accompanies it, are designed to provide information about setting up, managing, and running Windows NT-based systems in a lights-out environment. The guide provides samples of scripts and setups, and more samples can be found in the accompanying download. Additionally, throughout this guide there are URLs pointing to Web sites that provide still more information to further support the job of managing Windows NT-based systems remotely. A great deal of knowledge about running Windows NT in a lights-out environment exists in the public domain, and this guide is designed to help you access more of that information.

For more discussions about management approaches for Windows NT, visit

Tools Download:

To install the Lights Out Operation Guide tools for Windows NT 4.0:

To install the Lights Out Operation Guide tools for Windows 2000:

Setup, Updates and Removal

Because the familiar Windows NT graphical UI is not optimized to support remote management—particularly under out-of-band network conditions—and because the graphical UI is not designed to support batch scripting capabilities, administrators and managers of data centers who wish to manage multiple sites remotely require a more traditional approach to system management. One common approach relies on an out-of-band ANSI- or VT100-style terminal session, which allows a command-line approach to system management and administration. Other out-of-band, as well as in-band, approaches are also used, and the following sections outline a number of alternative approaches and interfaces for remotely managing Windows NT–based systems.

For more generic discussions about management approaches for Windows NT, see




Out-of-Band: Access to the host via a connection that doesn't use the IP network

In-Band: Access to the host via a connection that does use the IP network

COM Port: A 25- or 9-pin serial port using a direct modem or terminal server connection

Net Port: Operating system TCP/IP connection services

Out-of-Band Access

BIOS & Boot Device Control via COM port/VT100

This functionality is often provided via a hardware board from the appropriate system hardware supplier. With the advent of the NetPC, more hardware vendors are releasing NetPC-type boards for their high-end server lines to support out-of-band access during periods of network or operating system connectivity failures. Compaq's, DEC's, and HP's out-of-band management boards support BIOS checking and modification, control of the boot device, and ANSI terminal access to the boot process and subsequent operating system load. Intel has released the Server Manager Pro solution—software plus a PCI bus add-in card—that provides similar functionality for hardware platforms that don't already offer it. An alternative that doesn't require specific host hardware or the addition of boards to your hosts is the "Emerge Remote Server Access" solution from Apex. This approach provides less low-level hardware control but does enable remote access to the console video/keyboard. By connecting to the Emerge RSA you can gain access to the conventional video switcher devices often used in datacenter racks to link all hosts back to a single video/keyboard that, until now, also had to be placed in the racks. Using this product in conjunction with a remotely controllable power supply, such as the "MasterSwitch" from APC, completes the solution, enabling the full spectrum of out-of-band services necessary to operate a host that does not ship with this type of functionality built in. One example of hosts that come with all these features built in is the Compaq line of rack-mountable servers, which enable this functionality through their "Integrated Remote Console" services. In the accompanying download you will find notes on enabling and using this particular service under .\$oem$\support\oobSvcs.
In all cases the goal is to enable access to the standard Windows NT ANSI boot screens after issuing the command to recycle the power, at which time one can select alternative or backup versions of the operating system from which to boot. These cards also support setting the boot device to a floppy, CD-ROM, or the network—at which time the system can recycle and run a scripted setup to overwrite the corrupted operating system installation, or boot from a known good version of the operating system kept somewhere on a central network. Once the operating system is again running, one can then use the in-band connectivity options discussed in this section to apply any necessary customizations or restore a known good copy of a particular operating system setup and configuration to get back to the state that existed before the problems arose.
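For reference, the boot menu entries that appear on these ANSI boot screens come from the host's boot.ini file. As an illustrative sketch only (the ARC paths and folder names below are examples, not values from the accompanying download), a host prepared for this recovery approach might keep a backup installation alongside the primary one:

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows NT Server 4.0"
multi(0)disk(0)rdisk(0)partition(2)\WINNT.BAK="Windows NT Server 4.0 [Backup]"

Selecting the second entry from the remote terminal boots the known good copy while the primary installation is repaired.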

With this approach one can also consider having hardware vendors drop ship new or replacement systems directly to a regional data center with no pre-configuration. One can bring the system on-line and custom configure the system entirely from a centralized operations center.

Operating System and Services Control via COM Port/VT100

This approach relies on a terminal server service added to the Windows NT installation. ISVs such as Seattle Lab and Softway currently provide these types of services for Windows NT. Also note that for multiple COM port access it is useful to consider the use of a Livingston, Cisco, or 3Com terminal concentrator, which enables a single modem or Telnet/IP connection to a remote datacenter to gain access, via null-modem jumpers, to any of a number of COM ports on rack-mounted headless systems.

In-Band Access

Operating System and Services Control via Net Port/VT100

This approach uses a remote shell, secure shell, or Telnet server service added to the Windows NT installation. ISVs such as Seattle Lab, DataFellows, and Softway provide these types of services for Windows NT. Microsoft recently announced the delivery of Services for UNIX, which also provides Telnet server services for the Windows NT environment. In the future these protocol servers will also make it possible to run remote shell and Telnet sessions using the IP Security (IPSec) protocol specification, which provides more control over authorization and more protection for data passed over the connections.

Hardware, Operating System and Services Control via Net Port/HTML

A Hypertext Markup Language (HTML)-based, or "webAdmin," front end to many common Windows NT management tasks is included in the Windows NT Resource Kit. It is also available on-line.

Most of Microsoft's recently released Internet services provide both MMC interfaces as well as HTML-based counterparts for administering and configuring the service. Such is the case with all of the components in the Windows NT Option Pack (e.g., Internet Information Server) and the Site Server product line. Hardware vendors also often provide an HTML-based version of their management tools to support viewing current hardware diagnostics and configurations over HTTP connections.

Hardware, Operating System and Services Control via Net Port and Win32

The Windows NT Server Terminal Server Edition enables operators or administrators to run Win32® API sessions on the given Windows NT-based host from remote Windows-based terminals. This enables a management resource to remotely use any Win32-based user interface that would normally be accessible from the host's console, much like the XClient/XServer environment for the UNIX operating system.

This approach to remote management access works best for one-off tasks, while the VT100 user interface and associated command line tools work best for batch processing management tasks. Alternatives for remote console capabilities that do not require a different build of the operating system are products like Computer Associates' (formerly Avalan's) Remotely Possible or Compaq's Carbon Copy—or the remote management console support provided in the Microsoft Systems Management Server product. Note that hardware vendors' management tools almost always provide a Win32 user interface version to support viewing current hardware diagnostics and configurations. Additionally, Microsoft's recently released Windows NT Server Terminal Server Edition has Remote Win32 services enabled in the basic installation. Note that with this edition of the OS none of the Remote Win32 sessions currently enables access to the native console video/keyboard session; rather, each is a unique virtualized Win32-based session unto itself.

Intel's Remote Management Solution Set

Intel's Remote Management Solution software set allows the user to remotely reboot and run diagnostics on Intel architecture-based servers and to utilize the remote management capabilities provided by Computer Associates' Remotely Possible product. A UNIX command line interface is provided by Softway Systems' Interix product. This solution set is software-only and does not require the purchase of additional hardware. It includes a whitepaper and individual cookbooks with step-by-step instructions for setting up and using key remote management functionality.

With these technologies, ISPs will have a comprehensive management solution for Intel-based servers running the Windows NT Server 4.0 operating system. The Remote Management Solution Set provides these features with the software in the table below.



Remotely Possible 4.0 from Computer Associates: GUI-based remote control software

Interix 2.2 from Softway Systems, Inc.: UNIX system environment for the Windows NT Server 4.0 OS

Intel-developed Remote Diagnostics Infrastructure: Floppy-based, software-only remote diagnostics infrastructure

Secure connection: Microsoft's VPN based on PPTP/RAS

Scripted Operating System Setups

From a non-partitioned state one must be able to execute an initial setup of the operating system and subsequently install a base set of services that will enable it to function in a specific role (such as a Web protocol server). This process is often referred to as "jumpstarting" the system. Extensive coverage of support for scripted setup of Windows NT Server-based systems is provided in .\$oem$\training\unattend.doc and in the on-line documentation. In the parent folder of the accompanying download (i.e., $oem$) you will find a scripted operating system and services setup process based on the OEM unattended setup model, tailored for network operators. The readme.txt found under the $oem$ folder provides information on how to make use of this sample working solution. It offers an example of how to use the services discussed in the deployment guide to perform a network boot and initiate scripted operating system and service setups on headless servers. In addition, with Windows 2000 you have the option of using similar services such as the "Remote Install Service" in either what is known as "Flat" or "Prep" mode. Flat mode uses a shrink-wrapped network boot disk and mini-setup routine to initiate unattended setups very similar to the OEM unattended setup process. Prep mode differs in that it connects the host to the network, copies a pre-configured image [created using sysprep] of what you want in operation, and then updates the configured hardware and security identifiers to give the new system uniqueness on your existing network. Currently the Remote Install Services work only with hosts that use NICs with boot PROM support, which can be expected to expand over time to become a standard option on all NICs. See the Windows 2000 Server on-line documentation for more information on the use of these approaches.
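To give a sense of the OEM unattended setup model, the following is a minimal sketch of an unattended answer file of the kind described above. The key names are standard unattend.txt entries, but the values shown here are placeholders rather than the settings used in the download's sample.

[Unattended]
OemPreinstall = yes
NoWaitAfterTextMode = 1
NoWaitAfterGUIMode = 1
TargetPath = WINNT
[UserData]
FullName = "Network Operations"
OrgName = "Example Corp"
ComputerName = WEBSVR01
[Network]
JoinDomain = OPSDOMAIN

Passing such a file to winnt.exe or winnt32.exe with the /u switch drives the text mode and GUI mode portions of setup without operator input.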
Many 3rd-party ISV solutions that use image copying approaches also exist. These allow you to take a properly configured host and then enable administrators to restore its image and replace the host's security identifiers, effectively creating a new, uniquely-registered system with the same configuration. Some good examples of these types of offerings are ImageCast and Ghost.

Scripted Services Setups

Like the Windows NT operating system itself, most current Internet protocol-based solutions from Microsoft and other vendors provide scripted setup support. Often, acquiring the syntax for this type of setup requires no more than calling the specific setup routine from a command line with the "/?" or "/h" argument. Where help is not provided at this level, reference to the vendor's documentation is required.

Many of these scripted setup initialization calls are included in the accompanying download's .\$oem$\unattend\postOs.bat script. Since scripted setups for most of the fundamental network operator services are provided in this sample setup process, it should provide an initial starting point from which to grow a set of models for remote setup of services. Examples in this script show calls to execute the scripted installation of services that use the Microsoft Win32 setupApi services and AcmeSetup services, InstallShield scripted setups, and the most recent innovation in setup, the Microsoft ActiveSetup services. Since these cover 80 to 90 percent of the setup models used by Win32 services today, one should have ample examples of how these scripted setups can be achieved, given knowledge of a service's setup methodology. In the worst case, where scripted setup is not supported, one can fall back on the "SmsInstaller," "Windows Installer Limited Edition," or "SysDiff.exe" utilities provided as part of the Sms20 and Windows deployment tools retail bits. The use of these utilities is described in the Windows NT deployment guide and in the Windows NT Resource Kit. Several services in the download's turnkey sample use these utilities to enable scripted setup in the absence of one provided by the ISV who delivered the service. Refer to the Systems Management Server Installer documentation for assistance with the features that allow one to work around the lack of silent mode setups in a currently released service.

The following are some general notes on how to run scripted setups using services based on each of the mentioned setup infrastructures.

Microsoft SetupApi Setup Process

rundll32 setupapi,InstallHinfSection <sectionName> 0 <infFile.inf> 
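The <sectionName> argument names an install section inside the INF file. As a hedged sketch (all section, file, and registry names below are illustrative, and a complete INF would also need source media sections), an INF consumable by this call has the following general shape:

[Version]
Signature = "$Windows NT$"
[DefaultInstall]
CopyFiles = MySvc.Files
AddReg = MySvc.AddReg
[MySvc.Files]
mysvc.exe
[DestinationDirs]
MySvc.Files = 11 ; 11 = the system32 folder
[MySvc.AddReg]
HKLM,Software\Example\MySvc,Version,,"1.0"

With this file, the call becomes "rundll32 setupapi,InstallHinfSection DefaultInstall 0 mysvc.inf".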

Microsoft Acme Setup Process

acmsetup.exe /g <logfile> /qn1 

Note that "acmsetup.exe" can sometimes be stored on source media as "acmboot.exe" or "setup.exe". Generally, Acme setup processes require an STF file and an INF file.

Microsoft ActiveSetup Setup Process

setup /g <logfile> /qn1 

InstallShield Setup Process

setup /r /f1<responseFile.iss> /f2<logFile.log> 

This will create a scripted silent mode setup response file.

setup /s /f1<responseFile.iss> /f2<logFile.log> 

This will process the scripted silent mode setup response file and set up the service. Add the "/SMS" flag if running setup from a UNC path instead of a local or mapped network drive.

Systems Management Server Installer for Lack of Scripted/Silent Mode Setup

Run the GUI SmsInstaller tool to create the setup.exe for the service you want to set up. Then use "setup.exe /s" to run this setup in silent mode from the command line on all other systems.

SysDiff for Lack of Scripted/Silent Mode Setup

Use the SysDiff utility for services or configuration steps that don't already have unattended modes of operation available. This utility can be found in the "<cdrom:>\Support\DepTools\<platform>" folder on the Windows NT retail CD, or in the copy included in the OemFiles folder provided in this download. It tracks what happens to a system during a manual setup process and creates an INF file that can be processed by the Microsoft setupApis to set up the service in scripted silent mode. Use this method as a stopgap until newer versions of a service are released that support scripted silent mode setup.

Even when this model exists, SysDiff can still be useful in cases where you would like to generate a security audit trail of how a system is affected by a given service setup. The following is the general flow of commands used to generate scripted silent mode setups with the SysDiff utility.

sysdiff /snap /log:before.log c:\temp\before.img 
// takes a snapshot of the current state 
sysdiff /diff /log:after.log [/c:comment] c:\temp\before.img c:\temp\after.img 
// compares the current state against the snapshot 
sysdiff /apply /m c:\temp\after.img 
// applies the package in a scripted silent mode driven by the sysdiff utility 
sysdiff /dump c:\temp\after.img c:\temp\after.txt 
// dumps text output about package 
sysdiff /inf /m c:\temp\after.img c:\temp 
// creates inf file and cmdLines.txt entries 

The master service scripted setup example provided in this download uses cmd.exe-supported batch commands. For administrators with a background in the Practical Extraction & Report Language (PERL), there are many PERL interpreters available for Windows NT and, more importantly, PERL interpreters with support for calling COM Automation interfaces and a subset of the Win32 APIs. Several vendors offer PERL interpreters with this support.
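To make the batch model concrete, a master driver of the general kind found in the download might chain individual scripted setups as follows. The server, share, and service names here are hypothetical, not the ones used in the sample.

@echo off
rem Map the central distribution share (hypothetical server and share names)
net use x: \\distsvr\ntsetup
rem Run an InstallShield-based service setup in silent mode and record failures
x:\websvc\setup /s /f1x:\websvc\install.iss /f2c:\temp\websvc.log
if errorlevel 1 echo websvc setup failed >> c:\temp\postos.err
rem Apply a SysDiff package for a service with no silent setup of its own
sysdiff /apply /m x:\packages\statsvc.img
net use x: /delete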

The standard installation of Windows NT plus the Windows NT Option Pack enables what is referred to as the Windows Scripting Host. From the command line, the scripting host supports running scripts written in JScript, Visual Basic Scripting Edition (VBScript), and PERL. This list will expand going forward to support new ActiveScripting engine providers as they become available. Many useful command shell scripts can already be found in the "<drive:>\<winsysdir>\inetsrv" folder.

Compartmentalized Setups

Two approaches currently exist to facilitate this type of setup. The first exists in the current beta release of the Windows NT Server Terminal Server Edition which controls the setup run by any user other than an administrator at the console and writes all system folder-destined updates and configuration files to the user's personal profile folder structure. In conjunction with pre-configured application folders for a given user, this effectively allows an administrator to compartmentalize a given user's service setup.

The second, and more generic, approach to compartmentalized setup is to use the Systems Management Server Installer or the SysDiff utility to first track the setup of a given service on a test system. Once this has been completed, one then reviews the registry, files, and security settings that the setup tries to execute and then creates a customized version using the Systems Management Server Installer or SysDiff scripting services, controlling where system and application folder-destined files and registry entries will actually be written when applying the service on a production system.


High Availability and Fault Tolerance

The terms "high availability" and "fault tolerance" are often discussed individually, but they deal with similar issues. A high availability system requires 99.999 percent uptime—that is, less than 5.256 minutes of downtime per year on a 24 hour-per-day, 7 day-per-week, 365 day-per-year service schedule. Fault tolerance refers to, and is often achieved by, the use of hot-swappable devices such as drives, random access memory (RAM), central processing units (CPUs), and power supplies. Solutions involving uninterruptible power supplies (UPS), disk mirroring methodologies, or even clustering operating system services ensure the continuous delivery of services, even in situations in which hardware or software failures occur. High availability and high reliability are actually goals, whereas fault tolerance is an approach to meeting those goals. In the accompanying download you will find a detailed discussion of this topic from the perspective of "reliability" under .\$oem$\support\availScale\availMsft.doc.

Traditional approaches to high availability have relied on a mixture of hardware and operating environments. Highly available hardware environments have been achieved by purchasing fault tolerant hardware solutions with redundant components in the areas of storage, CPUs, RAM, and in some cases input/output (I/O) bus services. Adding the capability to swap out broken components while the system is operational (hot-swapping) enables operators to replace these devices and bring the system back on-line without incurring a shutdown of the operating system and services. The Windows NT hardware compatibility list shows the existence of many offerings that provide these types of solutions. Compaq, DEC, Tandem, Hewlett-Packard, and NetFrame are just a few major manufacturers that offer these types of platforms.

Work can also be done up front to ensure that services running on a particular hardware and operating system combination are reliable and will operate with a certain level of stability. In the Windows NT operating system environment, this is partially achieved through the Win32 logo certification process. Any Win32-certified application has undergone testing by Veritest, an independent testing laboratory. Veritest ensures that an application is comprised of true 32-bit code and that it passes a rigorous set of Microsoft standards before earning the Win32 logo. More information on the logo certification process is available on-line.

In addition to Win32 logo certification, reliability is often impacted by the ability to detect problematic processes [DLLs, a.k.a. shared libraries] that, when called by a parent process and set executing on a thread, eat up all of the system resources. Even if all system resources are not used up, there may be enough reduction in system performance that a service's performance degrades to an unacceptable level. Isolating and issuing a "kill" against the process that owns this thread is very similar to how one might approach diagnosing the problem on UNIX. The download associated with this paper provides some utilities from the Windows NT Resource Kit that allow you to do similar things on the Windows NT platform. For instance, the command to list currently running processes and their resource allocations is "tlist," available in the associated download and in the Windows NT Resource Kit. Once the troublesome process has been detected, it can be shut down using a command such as "kill <process name | id>". This utility is also available in this download as well as in the Windows NT Resource Kit and provides functionality similar to its UNIX counterpart. In Windows NT 5.0 an improvement is planned in the parent/child process creation routines that will enable parent processes to keep handles to all child processes that have been spawned, and thus allow proper cleanup of all processes by simply stopping the parent.
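In practice, the triage described above reduces to a short sequence of Resource Kit commands; the process ID shown is illustrative.

tlist > c:\temp\procs.txt
// capture the list of running processes and their IDs for review
kill 1234
// terminate the runaway process by ID (a process name can be used instead)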

Fault tolerance can also be achieved through hardware or operating system clustering services. Microsoft's Enterprise Edition of the Windows NT operating system supports the creation of two-server clusters and provides load distribution services when both systems are fully operational. In the event that either system fails, regardless of whether the failure occurs at the hardware, operating system, or service level, Windows NT Enterprise Edition shifts the services provided on the failed system to the remaining system. Later in 1998, Microsoft plans to release an update to this clustering service, increasing the number of cluster members to four and providing automatic load balancing of services when all four systems are fully operational. This provides both fault tolerance and a powerful scalability solution in one.

In addition to using reliable software and hardware, network environments can also be engineered to provide high availability and fault tolerance. This approach usually involves linking redundant paths into an autonomous system housing a set of services, along with the use of virtually-advertised IP destinations that can easily be load-balanced across several systems when everything is functioning and dynamically routed to hosts still in service during failures or scheduled maintenance. Microsoft has recently announced the Windows NT Load Balancing Service, a derivative of the Convoy Cluster Software from Valence Research, Inc., which allows one to load balance at the IP layer, creating clusters of up to 32 nodes for a given IP address.

Finally, consistency in your server configurations can play a major role in the availability, reliability, and fault tolerance capabilities of a service. If systems require operator intervention to answer setup dialogs and provide configuration responses, there is room for human error, which creates opportunities for unstable systems as well as for systems that are stable but inconsistent from one system to the next. Using the materials discussed in the scripted operating system and services sections of this guide can help guard against availability and reliability issues associated with configuration matters. Scripting setup procedures ensures that the system configuration is identical across multiple machines and overcomes the problems associated with accidental operator input error. It also creates an environment that is easier to debug when a problem arises. If you are experiencing a problem on a production server, then that same problem should exist on all identically-configured systems. If not, then the system in question is experiencing a problem that is not related to the operating system and scripted services setup. Moreover, unattended setup scripts enable administrators and analysts to recreate a system setup and configuration in a non-production environment, where problems can be emulated, tested, and remedied outside of production.

In Windows NT version 5.0, many features are scheduled to be added to improve upon high availability and fault tolerance. Most notable are the improvements in parent and child process associations and in cleanup when problems arise with any one of the processes involved in a given task. In release 5.0 there will exist the concept of "Job Objects," under which any and all processes can be closed and have their resources returned to the system pool simply by stopping the Job Object. Windows NT version 5.0 is also designed to provide greater kernel control, even in the event of a system hang, which will enable administrators to log problems more effectively and to recycle or attempt a restart of the operating system more expeditiously.


Scaling Services

There are a number of standard approaches to scaling services in the Windows NT environment. Often the approaches discussed in the "high availability/fault tolerance" section double as methods for scaling services within your environment. The assumption in a discussion of scaling is that one is trying to improve the performance of a service on a standalone server—or on an individual member of a group of servers. If scaling requirements have exceeded what one system can achieve optimally, there are additional methods for scaling the operation of services in a multi-server environment. In many cases, the models discussed below provide an additional level of fault tolerance and thus aid in addressing availability and reliability issues.

Hardware Updates

Add Storage Controllers and Devices

This increases disk I/O throughput, allowing a disk-intensive service to operate at higher rates. Given today's drive seek and read speeds, it has been found that as few as 2 or 3 drives in a given RAID level 0 set are capable of saturating Ultra SCSI storage adapter channels. Therefore, be aware that there is a point after which you will not gain improved disk seek and read performance by adding more drives to your striped set. In this case you are better served by adding another RAID level 0 striped set on the next channel available on a given storage controller—or by adding another storage controller to gain an additional channel to which you can connect the new drives. Monitoring disk I/O performance monitor counters can tell you whether this step will improve performance.


Add RAM

More RAM provides more working space for a service and reduces the number of paging file swaps. It also provides more room for caching of disk-based information. Large amounts of RAM leveraged as a disk cache can effectively dismantle the disk I/O bottlenecks that often compromise system I/O scaling efforts. Monitoring physical memory and cache hit rate counters can indicate whether this step will improve performance. Some services do not benefit as much from caching—including current implementations of many services supporting on-demand or live streaming, where data is always pulled directly from disk or from live (versus cached) hardware encoders—due to the transient nature of their data.

Add CPUs

This increases the performance of a given service by allowing it to spread its client and server processing requirements over an increased amount of CPU horsepower. Monitoring CPU performance monitor counters can tell you whether this step will improve performance.

Add More Network Interface Cards to Physical Network Path

This increases net I/O throughput, allowing a traffic-intensive network service (i.e., one with a high volume of client requests and server responses) to operate at higher rates. Monitoring net I/O performance counters can tell you whether this step will improve performance. When adding multiple NICs to a system, it is best, where possible, to assign each to a separate logical/virtual subnet when only one physical network exists. This enables the stack to have a default route associated with each NIC so that not only inbound traffic but also outbound responses will be load balanced. Given hardware and operating system driver overheads, there is an eventual cap to scaling NIC I/O. Administrators are advised to monitor network counters to see when this cap has been reached.

Tuning the Operating System and Services

If each of the approaches above has been leveraged, then additional performance improvements can be gained by tuning the operating system and the specific services running on it. Often the drivers for key hardware components (storage, RAM, NIC) can be tuned to increase service performance. In the case of a NIC, the frame size used when placing IP packets on the network can often be tuned larger or smaller, depending on the type of net I/O that is most common for the services you have running. Similarly, the NTFS file system driver supports some levels of tuning that allow for the use of larger Windows NT file system transaction log caches or that disable the creation of MS-DOS® 8.3 file name entries in the directory and file tables—both of which can improve storage write performance, provided that these changes do not affect the set of services you have set up on your server.
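As an illustration of this kind of tuning, disabling 8.3 short-name creation is controlled by a single registry value. The sketch below assumes a standard NTFS configuration; as always, test such a change on a development system before applying it to production servers:

```
; disable_8dot3.reg — sketch: disable MS-DOS 8.3 short-name creation on NTFS.
; Apply with "regedit /s disable_8dot3.reg"; takes effect after a reboot.
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisable8dot3NameCreation"=dword:00000001
```

Because a .reg file of this form can be applied silently with the /s switch, the same fragment can be folded into the scripted setups discussed elsewhere in this document.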

Another example is the server service on a Windows NT-based system, which can be tuned to optimize RAM for file caching activities or for network application communications—whichever is best suited to your environment. A similar setting can be applied to the host itself, telling it whether to ensure better performance for applications being run interactively at the console or for the services that it is operating in the background as daemons. In each case, administrators can review the contents of the .\$oem$\support\tuning folder provided in the download associated with this document, which gives examples of how to apply some of these common settings to get optimal performance from a server.

Add Another Server

If you have reached the maximum or optimal price point for improving each of the previously mentioned components of your standalone server, then the next step is to consider the addition of a new server to support the optimized delivery of your service. This consideration leads to a discussion of leveraging any one of the scaling options outlined in the remainder of this section.

Virtual IP's for Front End Processors

This approach is achieved by advertising a single name/IP address to the public that can be serviced by many hosts back in the datacenter. DNS round-robin has often been used to apply this type of solution to a group of front-end processing machines. The problem with this approach is that it really only virtualizes the name advertised to the public, and it requires manual intervention when a host fails in order to stop client requests from being routed to the failed host. Independent software vendor (ISV) Cisco has a product called LocalDirector that provides true virtual name/IP services and, in addition, automatically removes a failed system from the list of servers currently capable of handling client requests. Similarly, the F5 Labs Big/IP2 and the RND WebServiceDirector provide this functionality as well. Microsoft's recent addition of the Windows NT Load Balancing Service, a derivative of the Convoy Cluster Software by Valence Research, Inc., allows you to provide similar functionality from within the operating system itself. Query your favorite Web search engine for links to additional product vendors and information threads.

Clustering of Read/Write Stores

As discussed earlier, Microsoft's Windows NT Enterprise Edition enables one to cluster two servers together in support of load-distributing services when both systems are fully operational. It then provides fault tolerance when a given system fails at the hardware, operating system, or service level, at which time the services provided on the failed system are shifted to the system that is still running. Later in 1998, Microsoft plans to release an update to this clustering service that increases cluster membership to four and provides automatic load balancing of services when all systems are fully operational, which will provide fault tolerance and a scalability solution all wrapped into one. One thing to keep in mind with clustering solutions is that tested and optimized support for individual services will vary.

Centralized Storage Farms

Most virtual IP/front-end processor (FEP) solutions as outlined above will be configured to have each of the participating hosts acquire their data from a single reliable data storage point, such as an Auspex, Network Appliance, or Origin2000 storage server, via the Network File System (NFS) or Common Internet File System (CIFS) protocols. This enables easy addition, replacement, or removal of any one of the FEPs, because the local disk configuration is very generic and all of the transient data is kept on the centralized storage farm service. This configuration, used in conjunction with a scripted operating system and services setup as discussed in this document, enables the easy removal and reconfiguration of a corrupted system remotely—or the easy addition of a new remotely-configured system—which can then immediately serve up the same transient data being delivered by all other servers. Since disk access now requires crossing the network, you will want at least a switched 100Base-T or FDDI network connecting the servers to the centralized storage farm in order to ensure that transfer speeds are reasonable. Software such as Microsoft Internet Information Server can connect to network storage farms either via Universal Naming Convention (UNC) paths or mapped network drive letters. Microsoft SQL Server can likewise connect to its data store via a mapped network drive. Since it is common with these and other services to place the operating system on the first partition (C:) and the transient service data on a second partition (D:), this model is configured similarly, provided you map the network drive as D:. Many other storage farm solutions exist, each with its own unique methods for ensuring high-performance, fault-tolerant reads and writes of data to servers across the wire. Other products to investigate are EMC2, DEC StorageWorks, and Sun's Storage Array.
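A FEP's attachment to the storage farm can be sketched as a one-line boot-time script; the server and share names used here (\\storage1\webdata) are hypothetical placeholders for your own:

```
rem map_store.cmd — sketch: attach transient service data from the storage
rem farm as drive D: so that services configured for D: work unchanged.
net use D: \\storage1\webdata /persistent:yes

rem IIS virtual directories can alternatively point at the UNC path
rem directly, e.g. \\storage1\webdata\wwwroot, with no drive mapping.
```

Because every FEP runs the same mapping, a corrupted host can be wiped, rebuilt from the scripted setup, and remapped without touching the data itself.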


In the simplest of non-lights out operation configurations, maximum security can be enforced simply by locking access to the room where the console and keyboard for the server in question are kept. This effectively prevents anyone from gaining administrator access to the console and prevents unauthorized changes to the system. One should also apply the Windows NT account policy setting that forces administrators to "log on locally" to gain access to the console.

In a lights out operation, all operations, configuration, and management activities are undertaken remotely, so more traditional models for implementing solid security solutions apply. As a start, assuming that all the systems in question are headless implementations, one should take the precaution of covering the integrated video board and keyboard connectors with silicone. This will prevent someone who has gained access to your systems room from accessing the console by simply plugging in a display and keyboard. Many high-end server solutions today provide BIOS options to disable the console video and keyboard ports to achieve the same result. Alternatively, one could configure Windows NT-based account policies so that administrators could not "log on locally," which would achieve the same end result.

Note that some hardware solutions require a keyboard nub so that the hardware BIOS detects that a keyboard is in fact plugged in during system boot, as some BIOS implementations halt when a keyboard is not detected. As for the video board, Windows NT requires this to be treated as its main console port. Having a monitor attached is unnecessary, but you should configure the system with an integrated video board that allows the operating system to properly load a standard 640x480, 256 color VGA driver.

That said, we will now look at several traditional approaches to securing a "lights out operation" configuration of Windows NT.

Operating System and Services Installation Settings

Operating system installation settings involve the various security aspects of an operating system, such as users, group membership, access control lists, registry hive access, and other parameters. Manipulating the operating system installation settings allows one to fine-tune those settings so as to limit the set of features one can access on a host. For an in-depth look at applying appropriate security settings to Windows NT-based systems running on public networks, see the very detailed discussion produced by the NSA. Complementing this paper is the Security Configuration Editor (SCE), delivered in Windows NT 4.0 Service Pack 4. The SCE provides a Microsoft Management Console (MMC) snap-in and a set of default security configuration files that allow one to define and apply a set of operating system and services security settings easily. The SCE also allows one to analyze a given host to see if its configuration still maps correctly to the settings that were defined when an SCE configuration was run on a previous date. This enables an organization to test a host regularly to ensure that it is still locked down properly.

Router IP Filtering

This solution uses protocol and request in/out port filter settings (provided in your router vendor's operating system) to control entry to your network—and, by extension, your servers and the services they are running. The "ipfilters.doc" file (included in the .\$oem$\security folder of the download) provides an outline of the input and output filters that must be enabled to protect some of the most common services that might be hosted on a Windows NT-based system. See also the "ipfilters.bat" file in .\$oem$\security, which provides an example of how to apply these filters in a script. One drawback to implementing IP filtering on a router is that it can eventually introduce a network I/O bottleneck, since all packets flowing into and out of a network must first be inspected by the router.

Network Address Translation Gateways

An alternative to filtering is to use Network Address Translation (NAT) services to make a server and its services available to the public. NAT enables one to apply security settings on a lone system (or on a set of systems) that proxies (or "NATs") requests from clients on the public network to your servers and then returns responses back out to the clients on the server's behalf. Several ISV-based NAT services running on Windows NT–based systems are available today. Commonly used services of this type include Axent's Raptor Firewall for NT and Checkpoint's FireWall-1. Microsoft is also currently working on NAT services to be delivered in the box with Windows NT version 5.0. The included "ipfilters.doc" file provides an outline of the input and output filters that need to be enabled for some of the most common services that might be hosted on a Windows NT-based system. As with router-based IP filtering, one drawback to implementing NAT-based filtering is that it can eventually introduce a network I/O bottleneck, since all packets flowing into and out of a network must first be inspected by the NAT. Moreover, since NATs act on behalf of a network's servers and services at the IP level, there may be cases in which an ISV has not enabled its NAT solution to support a service that you are trying to host.

Operating System IP Filtering

This solution uses the service protocol and request in/out port filter settings provided in the Windows NT operating system to control entry to a system and its services. In Windows NT, this level of functionality is provided via the Routing and RAS services, released as a post-Windows NT 4.0 update. All the filtering settings one would expect to find in a router vendor's operating system (e.g., Cisco IOS) set up for interior network services are available in these routing services on Windows NT. In the accompanying download you will find a .\$oem$\security\ipfilters.doc file that provides an outline of the input and output filters that need to be enabled for some of the most common services on a Windows NT-based system. The benefit of the operating system IP filtering approach is that IP filter processing takes place against only the packets that are intended for the host in question, so the approach does not introduce a bottleneck or single point of failure in the network infrastructure.

Service Authentication

Most popular Internet services such as HTTP, NNTP, or POP3 provide the ability to authenticate users before allowing access to a service. Once a user has arrived at this point, of course, the user request has already passed through any IP filter configurations implemented on the router, NAT, or server. Although the actual wire protocols for negotiating the authentication with the end client differ slightly from service to service, they all basically proceed with the service sending an access challenge to the client, followed by a user ID/password combination response returned by the client. Provided a match can be found in the configured security list, access to the service is allowed. This service authentication functionality exists with all Internet and non-Internet-based services running on Windows NT and can be pointed at various security lists. In the case of the Windows NT Option Pack with Internet Information Server, for example, service authentication can be validated against the Windows NT registry, Site Server Membership, or UNIX /etc/passwd security lists. In the case of the Windows NT Option Pack with Internet Access Server, RADIUS service authentication can be validated against the Windows NT registry, Site Server Membership, or any ODBC-accessible data source.

Service Operational Security Context

Services on Windows NT run very much like daemons on a UNIX platform. Each service configured to support client connectivity can be viewed remotely using a "net start" or "tlist" command, which will list all the services in operation. "tlist" is a command line version of the GUI Task Manager, and its output is similar to the output generated by a UNIX "ps" command: all running processes are listed with their current process ID, CPU usage figures, and other information. The "tlist" utility is included in the accompanying download under .\$oem$\unattend\utils, and additional documentation on its usage can be found in the Windows NT Resource Kit.
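From a remote command session, a quick inspection might look like the following sketch (output formats vary by version; the "web" filter string is only an example):

```
rem List the services currently started on this host
net start

rem Narrow the list to entries containing a keyword of interest
net start | findstr /i "web"

rem List running processes with their process IDs (Resource Kit utility)
tlist
```

These commands are the command line counterparts of the Services Control Panel applet and Task Manager, which is what makes them usable over an in-band or out-of-band remote channel.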

Administrators can remotely control the security context for a service via the "regedit /s <regFile.reg>" command at the command line. The easy way to create the <regFile.reg> file is to use the Services Control Panel applet on a locally accessible development system to configure the service with the desired operational security context—and then to save these service key settings to the file <regFile.reg> using the "regedit" GUI mode functionality on the local system. One can then apply this setting to the production system using

regedit /s \\myLocalHost\c$\temp\<regFile.reg>

which saves the step of having to transfer this configuration file to one of the remote host's drives. Using the "net stop <myService>" and "net start <myService>" commands will then restart the service, which is now configured to run in the newly defined security context. This limits the damage anyone who has managed to gain access to your systems (through accessing one of the services on the system) can do, by limiting that person's access to those things that the service user ID allows on the local system.
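The whole procedure can be collected into a small batch sketch; the service name, host name, and .reg file name below are placeholders for your own:

```
rem setctx.cmd — sketch: apply a new security context to a service
rem and restart it. "myService", "myLocalHost", and "regFile.reg"
rem are hypothetical names; substitute your own.

rem Stop the service on this host
net stop myService

rem Apply the service key settings exported on the development system;
rem the file is read across the wire from the administrator's machine.
regedit /s \\myLocalHost\c$\temp\regFile.reg

rem Restart the service under its new security context
net start myService
```

Run remotely, this avoids any console interaction and can be scheduled like any other administrative batch job.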

Kernelized Windows NT Setup

A kernelized Windows NT setup refers to an implementation of Windows NT that contains the minimum set of system and application files necessary to run a set of services. Microsoft currently works with many router, NAS, resource sharing, and black box companies that deliver these types of Windows NT-based setups on their hardware or other form factors. There are also ISV solutions, such as the toolkit from VenturCom, that allow one to analyze an active host and list all the system files that are not in use when all the services one wants to operate are configured and running. One can then run a script against this list to delete all the unused files from the host. This effectively limits the number of system files that are vulnerable to attack if an unauthorized individual is able to get past the IP filter and service authentication walls.

Testing and Resetting Security

It is important to test production server security regularly in order to ensure that the fundamental security subsystem is still intact and that the server is properly locked down from an access control list perspective. Windows NT Server, Terminal Server Edition includes a utility, similar to one found in the Windows NT Resource Kit, that will validate the file system, registry, and security list and apply C2-level security policies to each object it encounters, according to predefined low, medium, and high option settings. Windows NT 4.0 Service Pack 4 includes the Security Configuration Editor, which provides a more extensive set of features for analyzing and reporting on the current set of security settings for a given host. In addition, several online resources constantly cover the latest news and methods of ensuring the integrity of a host's security with respect to protocols and services. The NSA has also contracted Trusted Systems Inc. to produce a tool called the "Super Checker," which is currently available for download.

The SysDiff and Systems Management Server Installer tools (discussed earlier in this document) are very useful for tracking the security implications of, and compromises to system security made by, the setup of any new service on a host. These tools can track, log, and report all changes made when any setup process or day-to-day service operation alters a configuration setting on the host. A combination of these services should be used periodically to ensure that the addition of new services or access to the system by remote users has not jeopardized the security subsystem.


A variety of management services are available from both Microsoft and other vendors. These services enable administrators to maintain information about the systems and devices on a distributed network and to remotely monitor and respond to events taking place on the network. These are also crucial tools for remotely managing software installations and updates, as well as powerful tools for remotely managing and scheduling backups and other routine tasks.

An SNMP agent is shipped in the box; you may want to use it along with the MIBs provided in the accompanying download under the .\support\reporting\mibs folder, or with whatever MIBs were provided with your specific application suite. This data can be displayed, and in some cases modified, using an SNMP management console, available from numerous third parties. A new initiative called Web-Based Enterprise Management (WBEM) is also underway. WBEM works with an agent-and-console model similar to SNMP's. The difference with WBEM is that its Common Information Model (CIM) agent structure is extensible and allows data from many different providers to be exposed to a single WBEM console. For example, on the Windows NT platform the WBEM/CIM agent gateways and delivers information produced by the SNMP (MIB), TMN (CMIP), DMTF (MIF), Performance Monitor, and Event Log agents.

If your environment does not require the use of a high-end management solution like one of those listed below, then you may want to leverage one of the fundamental services available with a basic OS installation: Performance Monitor and Event Viewer. Performance Monitor displays real-time data about what is going on at any point in time on a given host, while Event Viewer keeps a log of critical events that have happened on a host since you last cleared the log. Several third-party solutions, such as SiteScope and NetIQ, are also available to extend this basic level of functionality for the specific monitoring and reporting tasks that a network operator must manage. For additional solutions of this type, browse to your favorite Web search engine for more information on monitoring Win32 services.

Microsoft Systems Management Server

Systems Management Server is a key component in Microsoft's Zero Administration Initiative for the Microsoft Windows operating system. It provides tools such as hardware and software inventory, software distribution and installation, and remote diagnostics to let you better manage your computing environment, be it a few machines or tens of thousands of desktops. Systems Management Server is designed to help systems administrators lower their management costs by helping them install and maintain operating systems and applications, discover system configurations, and perform helpdesk operations. It is a highly scalable, WAN-aware product that integrates with the major enterprise management solutions.


Tivoli

Tivoli provides management solutions that make it easier for large and small organizations worldwide to centrally manage all of their corporate computing resources. In addition to providing a centralized set of tools for the remote management of Microsoft Windows NT Server systems, Tivoli provides tools to manage specific Windows NT-based services (such as Microsoft Exchange and the Microsoft Commercial Internet System), as well as resources running non-Microsoft operating systems (such as AS/400 and other systems).

Computer Associates Unicenter TNG

Computer Associates Unicenter TNG provides an open, standards-based framework for the remote management of heterogeneous resources, including resources running the Microsoft Windows NT Server network operating system.

Hewlett Packard Network Node Manager

HP Network Node Manager provides a family of tools for the remote management of systems, including those running Windows NT.

Shell Scripting Services

Currently Windows NT natively supports two scripting languages: JavaScript and Visual Basic® Scripting Edition. In addition to these two scripting languages, Microsoft provides the Windows® Scripting Host, a standard mechanism for third-party script engines to plug into the Win32 environment. Many scripting tools are currently available from third parties, such as ActiveState PERL for Win32, Advanced Systems Concepts XLNT (a DCL-like language), Desiderata Software Winductor, and Enterprise Alternatives WinREXX, to name only a few. Mortice Kern Systems offers a toolset for Windows NT that provides Korn, Bourne, and C shell environments. These shell environments, coupled with the suite of utilities that are provided, give an experienced UNIX user a familiar command line environment with which to work. For users who need cron-like capabilities on Windows NT in order to schedule administrative scripts, a wide variety of schedulers is currently available (in addition to the "AT" command task scheduler service in Windows NT), from simple personal tools up to the most sophisticated enterprise batch job schedulers, including Advanced Systems Concepts BQMS, American Systems EZ Scheduler, and Argent Software AQM and JSO, to mention only a few. Microsoft also maintains a Web page devoted to the latest developments in scripting for the Windows platform. Microsoft also recently announced the delivery of Services for UNIX, which provides UNIX shells and a PERL interpreter for the Windows NT environment.
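As a simple illustration of the built-in "AT" scheduler, the following sketch queues a nightly maintenance script on a remote host; the host name (\\webserver1) and script path are hypothetical:

```
rem Schedule a maintenance script on a remote host at 2:00 A.M. each weekday
at \\webserver1 02:00 /every:M,T,W,Th,F cmd /c c:\scripts\nightly.cmd

rem List the jobs currently queued on that host
at \\webserver1
```

For anything beyond this single-job model (dependencies, retries, cross-host job streams), one of the enterprise schedulers named above is the better fit.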

Many of the scripted setup and provisioning examples provided with this guide use these shell and scripting environments. Query your favorite Web search engine for links to additional vendors and information threads.

Customer Interfaces

In most network operator environments there is a need to provide customers with a way to sign up for and manage the services they are purchasing from the network operator. This normally takes the form of Common Gateway Interface (CGI) scripts written in PERL, which populate the HTTP stream destined for the customer with browser-independent HTML-based interfaces. Today, hosting and access customers want and need to be able to sign up for new services and modify their existing services without having to contact their customer sales representatives.

Network operators can provide customers with an interface to these services via Windows NT and Internet Information Server, but most are finding that a more flexible and performance-oriented solution can be delivered through the use of Active Server Pages (ASP) and Component Object Model (COM) technologies to produce the signup and ongoing management HTML interfaces. An additional benefit of building solutions using the ASP/COM framework is that there exists a large body of shareware samples of remote customer interfaces developed by ISVs and Web developers. Several groups within Microsoft have already released to network operators suites of COM components specialized for the chore of provisioning new customers. Site Server version 3.0 (which ships with the Active User Objects), the BackOffice® Small Business Server ISP Resource Kit version 1.1, and ICU–Personal Web Pages version 1.0 all come with a very network operator-centric set of component functionality and ASP usage examples, which can be used for building a remote customer interface.

On production servers, one can simply copy and register the desired component functionality in order to enable ASP pages that make calls to these components. Or, if one needs any of these services in and of themselves, the components are copied and registered as part of that service setup. Several of the JavaScript solutions used by the scripted setup discussed later do in fact use components from several of the services to enable advanced functionality from within scripting environments, such as the provisioning of IPs, FTP Web directories, Web services, user IDs, and ACLs. Refer to the examples for additional information on how these components might be used. Further examples, including source code, of these types of customer management interfaces are available for download; look for the Dashboard, Personal Web Pages, and SmallBiz Signup services.

Many third-party solutions are being developed to deliver shrink-wrapped products that solve the customer self-management issues. A great example of this is the DASH (Distributed Automated Shared Hosting for Internet Information Server) solution, available from Digital Foundary Inc.


While proactively monitoring the operating system, network interfaces, and services is important, it does not automatically mean easy publication of data to end customers and/or proper trend analysis so that problematic situations can be discovered and mitigated before they disrupt production operations. If you have deployed one of the management solutions discussed earlier, then it is likely to have features that will allow you to generate reports for at least a subset of your requirements. In addition, if you are looking for reports such as HTML/HTTP trend analysis, the Windows NT Option Pack, available on CD-ROM or via download, provides the Site Server Express "Usage Analyst" and "Site Analyst" products. Upgrading to the Site Server version 3.0 product line will provide access to the full-featured commercial editions of these components.

Whether from the CD-ROM/download or from Site Server version 3.0, these components provide very powerful reporting services on usage (Usage Analyst) and site integrity and layout (Site Analyst). Usage Analyst is the most often required reporting tool, and it can be configured to publish HTML-based reports for end customers on a service instance basis (e.g., for each unique shared web site). Usage Analyst also provides powerful support for a network operator's billing, usage, and trend analysis tasks through its ability to parse the detailed logs produced by the services running on a Windows NT-based host and to massage the results into a format that can easily be batch-loaded into an existing billing or decision support system.

Another common requirement for reporting is in the area of quota usage. Many ISV solutions exist today to provide this functionality; Quota Advisor/NT from W. Quinn Associates, Inc. is just one example. The Quota Advisor/NT product is a Windows NT-based application that enforces quotas on Windows NT volumes and their contents. Quota Advisor/NT is installed on each Windows NT-based system that has disks on which one has placed quotas. Subdirectories may be excluded from or included in their parent object's quota. Quotas may be assigned on a user or file structure basis. File structure quotas may be assigned to volumes, directories, or files. Query your favorite Web search engine for links to additional vendors and information threads.

In the Windows NT 5.0 release, quota management and reporting is a feature/function that will be built into the basic OS.

Load Testing

In preparation for production use it is important to understand the performance you can actually expect a service to provide on a given hardware, operating system, and network infrastructure combination. Many application-specific tools exist to allow for this type of data capture, but this section considers only a few that are more generic in nature—and thus support broader use when load testing a specific service.

Protocol Servers via InetMonitor

InetMonitor, previously called InetLoad, is available in the Microsoft BackOffice Resource Kit and can also be downloaded. InetMonitor is a technically advanced load generation tool for Internet-based services. It runs on Windows NT version 4.0. It has an easy-to-use interface and is designed to address multiple Internet protocols, including HTTP, NNTP, SMTP, POP3, IRC, MIC, and LDAP. By scripting the behavior of users (or clients), InetMonitor can simulate almost any user or client profile. InetMonitor supports multiple authentication schemes, including Basic, DPA, and NTLM. Because InetMonitor is written to lower-level interfaces, it has high performance characteristics. Depending on the protocol and the user profile, InetMonitor can simulate hundreds to thousands of active clients per machine.

InetMonitor is also a capable bottleneck detection program. It detects and reports hardware bottlenecks in real time, and will then generate and log alerts. Based on predefined conditions, InetMonitor will make recommendations when it detects bottlenecks. In timed mode, InetMonitor can capture system status, periodically average it over the given period, and report relevant system information.



The /etc/passwd and /etc/shadow files make up the default security list, group membership, and password settings for a UNIX host. Many ISV solutions are available today to provide this level of integration via NFS client and server service offerings for Windows NT. FTP Software and Softway are two such ISVs that currently have solutions of this type. Query your favorite Web search engine for links to additional vendors and information threads. Microsoft also recently announced the delivery of Services for UNIX, which enables integration with /etc/passwd on all major versions of UNIX. In each of these solutions there exists an integration and synchronization solution between the Windows NT security lists and the existing security environment on UNIX hosts.


By default, a Windows NT Option Pack HTTP server installation enables HTTP authentication via the Anonymous, Basic, and Windows-based challenge/response models against the Windows NT registry or Site Server Membership 2.0 security lists. Often there exists a need to authenticate users via HTTP authentication against existing comma-separated-values text files, UNIX /etc/passwd or database-held security lists, or LDAP environments.

  • Support for CSV text files is provided via the ISAPI HttpAuth filter included in the Windows NT Option Pack solution developer kit (<drive:>\inetpub\iissamples\sdk\isapi\filters\authFilt).

  • Support for UNIX /etc/passwd and database held security lists is provided via the httpAuth Integration filter found in .

  • LDAP HTTP authentication integration can be achieved by installing the Microsoft Site Server Personalization and Membership services, which allow authentication to be directed via the Active Directory Service Interfaces to an LDAP provider directory.

Common Internet File System (CIFS)

The Common Internet File System (CIFS) solution is based on the file and printer sharing protocols built into retail Windows-based products. Integration requirements for CIFS arise when one strives to provide similar resource-sharing support on existing UNIX hosts.

To achieve this, one acquires an implementation of the SAMBA service for the particular flavor of UNIX in question; refer to that UNIX vendor to find out how to acquire the service for a particular host. Once in place, the SAMBA service allows clients configured with CIFS redirectors, such as Windows, Macintosh, and SAMBA-enabled UNIX workstations, to connect to shared resources on any host supporting this protocol.

Network File System (NFS)

The network file system (NFS) solution has long been the standard protocol in the UNIX environment for sharing file system resources across a network. Many ISV solutions are available today to provide this level of integration via NFS client and server service offerings for Windows NT; FTP Software and Softway are two such ISVs that currently have solutions of this type. Query your favorite Web search engine for links to additional vendors and information threads. Microsoft also recently announced the delivery of Services for UNIX, which provides an NFS client and server for seamless integration with this flavor of network file system sharing.


As systems evolve over the next few years, the Kerberos security environment is expected to become the standard host security model. Microsoft is building Windows NT version 5.0 to use Kerberos as its basic security infrastructure. When version 5.0 of Windows NT becomes available, it is anticipated that administrators will be able to mix and match distributed Kerberos-based security trees between UNIX and Windows NT-based hosts, using the provisions that have been written into the standard to support this. For now, we defer this integration discussion until the Beta 2 release of Windows NT Server 5.0 is available.

Network Information Service (NIS)

The network information service (also known as NIS+ or Yellow Pages) environment is a scalable distributed security solution developed by Sun, with implementations available for most mainstream flavors of UNIX. It attempts to overcome many of the limitations of an /etc/passwd-style security environment, at the expense of being vendor specific. Since Kerberos is expected to be a vendor-neutral security solution, we defer investigation of integration with the NIS+ environment until both hosts are running Kerberos. In the interim, the ability to place LDAP interfaces on top of the Yellow Pages directory allows one to integrate with any of the Microsoft Active Directory environments: the current ADSI services incorporate an LDAP provider interface, allowing applications and services that recognize ADSI to leverage the information kept in an LDAP-exposed environment.

Additional Resources

More information about managing Windows NT-based systems in a lights-out environment is appearing daily. Managers should visit the Microsoft site on a regular basis for news, updates, links, and ideas, as well as the sites of vendors such as Intel, Compaq, Seattle Labs (in particular the pages that provide online documentation on the use of Seattle Lab's RemoteNT administration product), and others.


Microsoft Windows NT Server and the suite of services running on it today are well-prepared to meet the "lights out operation" requirements of network operators with distributed infrastructures. The information and the models in this guide, in conjunction with the sample materials provided in the download, indicate that most of the issues surrounding remote management of Windows NT Server-based systems are addressable today. The operating system itself is continuously evolving through direct Microsoft activities, and many ISV add-ons provide easier and more readily obvious mechanisms for enabling the features required in these types of environments. As Microsoft moves toward the release of Windows NT version 5.0, support for "lights out operation" will only improve.

Appendix A

Hardware I/O Calculations

When scaling a server it is always important to consider the maximum amount of I/O that the system is capable of supporting. Theoretical calculations are listed below; they must be used in conjunction with empirical results derived from your actual infrastructure to arrive at true production values, since bus arbitration, drivers, and protocol overheads have proven to reduce these values. An industry initiative called I2O (Intelligent I/O) is meant to further improve the I/O bandwidth capabilities of Intel-based architectures.

Disk I/O rates

For best performance, configure systems with dual-peer PCI bus-based SCSI controllers and stripe the data across both controllers' drive arrays.


= 20 MB/s burst * 2 controllers * 8 bits/byte

= 320 Mb/s - % overhead in disk drivers & sustained rate factor

Ultra SCSI

= 40 MB/s burst * 2 controllers * 8 bits/byte

= 640 Mb/s - % overhead in disk drivers & sustained rate factor
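The two disk figures above apply the same arithmetic; a minimal Python sketch (the function name is illustrative, not from the guide):

```python
def disk_throughput_mbs(burst_mb_per_sec, controllers=2):
    """Aggregate theoretical burst rate in megabits/s for data striped
    across multiple SCSI controllers (before driver overhead and the
    sustained-rate factor)."""
    return burst_mb_per_sec * controllers * 8  # bytes -> bits

print(disk_throughput_mbs(20))  # 320 Mb/s for a dual 20 MB/s controller setup
print(disk_throughput_mbs(40))  # 640 Mb/s for dual Ultra SCSI controllers
```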

Adapter Bus I/O Rates


32 bit, 33 MHz
= 4 B/transfer * 33.3 MHz transfer clock / 1 clock/transfer * 8 bits/byte = 1.066 Gb/s


64 bit, 33 MHz
= 8 B/transfer * 33.3 MHz * 8 bits/byte = 2.131 Gb/s


64 bit, 66 MHz
= 8 B/transfer * 66.6 MHz * 8 bits/byte = 4.262 Gb/s


= 8 B/transfer * 8.33 MHz transfer clock / 1 clock/transfer * 8 bits/byte = 533.12 Mb/s


= 12 Mb/s

IEEE P1394

= 400 Mb/s and up


= 133 Mb/s


= Same as PCI

RAM I/O Rates

Vendor specific


= 32 B/transfer * 66.6 MHz transfer clock / 12 clocks/transfer * 8 bits/byte = 1.421 Gb/s

Processor I/O Rates

Pentium Pro

= controlled by the speed of the transfer clock
= 32 B/transfer * 66.6 MHz transfer clock / 4 clocks/transfer * 8 bits/byte = 4.262 Gb/s

Pentium II 66

= same as Pentium Pro

Pentium II 100, Xeon

= 8 B/transfer * 100 MHz * 8 bits/byte = 6.4 Gb/s
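All of the bus figures above follow the same width times clock, divided by clocks-per-transfer formula; a small sketch (the helper name is illustrative, not from the guide):

```python
def bus_rate_gbs(bytes_per_transfer, clock_mhz, clocks_per_transfer=1):
    """Peak bus rate in gigabits/s: bus width times the transfer clock,
    divided by the clocks needed to complete one transfer."""
    return bytes_per_transfer * clock_mhz / clocks_per_transfer * 8 / 1000

# Pentium Pro: 32-byte cache-line burst over 4 clocks at 66.6 MHz
print(round(bus_rate_gbs(32, 66.6, 4), 3))  # 4.262 Gb/s
# Pentium II 100 / Xeon: 8 bytes per clock at 100 MHz
print(bus_rate_gbs(8, 100))                 # 6.4 Gb/s
```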

Network I/O Rates

For best performance, configure the system with dual-peer PCI bus-attached network adapters and enable some form of round-robin distribution of the client request loads, along with logical/virtual subnet homing of each network adapter for distributing the server response loads. For these calculations we assume that four network adapters per server is the optimal (or maximal) configuration, though many high-end systems can support more.

Switched 10 Base-T

= 10 Mb/s * 4 NICs/server
= 40 Mb/s - % overhead in packet and protocol

Switched 100 Base-T

= 100 Mb/s * 4 NICs/server
= 400 Mb/s - % overhead in packet and protocol

Given the maximum allowable values for each of the areas that contribute to a platform's I/O streaming capabilities, it can be anticipated that either the network adapters or the storage system will limit the I/O capabilities of a common Intel-based platform. The importance of storage system transfer rates can be reduced if enough RAM is added to the system, enabling it to cache data that it would otherwise have to fetch from the storage subsystem. As mentioned earlier, this method circumvents storage system I/O bottlenecks, but only for services that support the use of the file system cache.

If storage subsystem transfer bottlenecks can be minimized through caching, network I/O rates become the limiting factor. As the calculations above indicate, a four-adapter 100 Base-T configuration should be able to deliver 400 Mb/s minus the percentage overhead for packets and protocols. Generally speaking, when using switched network topologies one can achieve very close to wire speed from any given switch-connected adapter; the packet/protocol overhead in switched network environments is negligible. Depending on the vendor implementations of the switch gear, NICs, and drivers, one can come very close to sustaining optimal wire speeds. For planning purposes, 80 - 90 percent of the calculated I/O rate suffices for estimating a system's streaming capability; in this example, 400 Mb/s * 80% = approximately 320 Mb/s.
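The planning arithmetic in this paragraph can be sketched as follows (the 0.8 efficiency factor is the guide's 80 percent rule of thumb; the function name is illustrative):

```python
def usable_network_mbs(link_mbs, nics, efficiency=0.8):
    """Aggregate NIC bandwidth, discounted for packet and protocol
    overhead using a planning-level efficiency factor."""
    return link_mbs * nics * efficiency

# Four switched 100 Base-T adapters at 80% of wire speed
print(usable_network_mbs(100, 4))  # 320.0 Mb/s
```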

Empirically-derived results must inform the numbers used in planning the I/O aspects of scaling a service offering. In production environments the overheads associated with bus arbitration, operating system drivers, and protocols often substantially influence the actual I/O values one can expect from a given hardware/operating system/network combination. The theoretical calculations can suggest what one should be striving to achieve with a configuration as well as what one might expect in the future as hardware and operating system vendors work to remove the overhead inherent in their solutions.

Streaming I/O Calculations

Two issues need to be considered when scaling a streaming service. The first is the actual bit rate (ABR) that a client system is capable of receiving, given its connection to the server streaming the content. The second is the number of clients a given hardware/operating system combination can serve, given the ABR of the content that has been encoded for delivery to the clients. To resolve the latter of these two issues, refer back to the earlier section on hardware/operating system I/O capacities. Note, however, that these steps only aid in creating an initial estimate of the type and number of servers required to support a streaming service. As previously stated, empirical testing before going into production will more accurately characterize the environment in which the service will run; capacity planning tools such as InetMonitor, discussed earlier, can help characterize the environment.

Let us now discuss the first issue: arriving at the actual bit rate values for clients receiving the streaming content.

Actual Bit Rates

Actual bit rates are governed by the encoder of the content and are heavily dependent on the display resolution, frames per second, scene content of the raw video source, and the compression/decompression service (codec) applied to the content. A standard National Television Standards Committee (NTSC) TV channel broadcast is delivered at 640x480 display resolution, using Red/Green/Blue (RGB, 24-bit) color at 30 frames per second (fps) with no compression. To calculate the stream rate required to deliver this uncompressed video in digital format, compute the number of bytes it takes to transfer the video digitally, as the following calculation indicates:

640 x 480 (width x height display resolution) = 307,200 pixels

x 3 (bytes[24bit]/pixel color) = 921,600 bytes

x 30 (frames/second) = 27,648,000 bytes / second

* 8 bits/byte = 221,184,000 bits / second

/ 1,000,000 bit/Mbit = 221.184 Mb / second [Mbs]
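The arithmetic above in one runnable piece:

```python
# Uncompressed NTSC: 640x480 display, 24-bit RGB color, 30 frames per second
width, height, bytes_per_pixel, fps = 640, 480, 3, 30

bits_per_second = width * height * bytes_per_pixel * fps * 8
print(bits_per_second)              # 221184000 bits/second
print(bits_per_second / 1_000_000)  # 221.184 Mb/s
```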

This value changes again when an encoder service is used to apply a codec to compress the video stream before storing it for delivery or sending it out via a live broadcast. One must take into account the amount of motion present in the video sample, as this determines the effectiveness of a given codec when compressing the analog video for digital storage. Sports video, for instance, tends to have much more motion than a newscast.

Different codecs compress video in different ways. Intra-frame coding looks for redundancies within a given frame; inter-frame coding uses motion compensation. Motion estimation is accomplished by using I, P, and B frames, where I = intra-coded frame (a key or reference frame), P = predicted frame, and B = bi-directional frame. Techniques such as discrete cosine transforms and quantization tables allow for the removal of intra-frame redundancies. Techniques such as frame differencing (Cinepak) and motion estimation (MPEG-1/2/4, H.263, VXtreme codecs, etc.) exploit the deltas derived from the I, P, and B frames, using motion vector algorithms to arrive at motion compensation.

In the digital set-top box activities that have been underway for several years, cable operators have used quadrature amplitude modulation (QAM) to take digital data and then compress and transmit it over their broadcast frequencies. QAM can deliver 30 Mb/s in 6 MHz of broadcast frequency, which is equivalent to the frequency range that one analog broadcast NTSC channel consumes today. Therefore, to compress one of these uncompressed digital NTSC video streams of 221.184 Mb/s down to 30 Mb/s, we would need to set our codec for a 221.184 Mb/s / 30 Mb/s = 7.37-to-1 compression rate. Any codec such as MPEG-1, JPEG, etc. could handle this compression rate easily while retaining the quality of the original source video. Cable modems for PCs achieve 30 Mb/s of downstream digital bandwidth using this same QAM model for feeding digital data onto the cable broadcast network.
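The required compression rate for fitting the uncompressed NTSC stream into one QAM channel works out as:

```python
uncompressed_mbs = 221.184  # digital NTSC stream from the earlier calculation
qam_channel_mbs = 30        # QAM capacity of one 6 MHz broadcast channel

ratio = uncompressed_mbs / qam_channel_mbs
print(round(ratio, 2))  # 7.37, i.e. roughly a 7.37:1 compression rate
```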

Codecs such as MPEG-4 V2 can scale up to roughly 1000:1, suggesting that one could take uncompressed NTSC video and compress it down to fit into a 28.8 kb/s stream, but the resulting quality would be unsatisfying. Encoders often make compromises when delivering video over the Internet, in the form of reducing the display size and frames per second of the video stream. Today, public network sites with streaming video content frequently provide only a 100x100 display with a 10 fps stream. On an internal network or extranet, where network delivery rates are assumed to be better and more consistent, content sites may more often provide a 320x240 display at 15 fps. Output of this nature usually provides video quality and image size not noticeably different from a standard TV broadcast channel. On very fast and reliable locally-switched networks the opportunity exists, given an appropriate codec, to deliver a full NTSC video stream at 640x480 and 30 fps, but this is usually not necessary.

Keep in mind that the content creator/encoder determines the actual bit rate for an Internet-based digitized video stream. The Microsoft Windows NT NetShow™ Services encoder allows one to pick from a set of template settings for common types of connected clients, or to use custom settings, to produce the desired ABR for a given piece of content. The Microsoft NetShow streaming service then delivers this content to your clients at the ABR you have configured.

Actual bit-rate capabilities for clients receiving the encoded stream are as follows. Note that these figures attempt to take into account the overheads present within the given type of network connectivity.

Analog modem = approximately 22 kb/s

Single-channel ISDN = approximately 50 kb/s

Dual-channel ISDN = approximately 100 kb/s

Intranet 10 Base-T [or cable modem] = approximately 300 kb/s

Intranet switched 10 Base-T = approximately 8 Mb/s

Intranet 100 Base-T = approximately 30 Mb/s

Intranet switched 100 Base-T = approximately 80 Mb/s

Since a full NTSC piece of content can generally be processed by an encoder and delivered at < 2 Mb/s, full-screen TV-quality video can easily be delivered in any environment with intranet switched 10 Base-T or better. This is why codecs such as MPEG-4 V2 have maximum ABR settings of 1 - 1.5 Mb/s; anything greater is not necessary to deliver high-end video.

What follows are examples of some of the template settings available in the NetShow encoder. They suggest the parameters used to achieve the most common client ABRs.

28.8 kb/s [22 kb/s actual] Presentation Video

8 fps, 176x144, video quality rating = 30%

Avg. Compression Ratio = 8x176x144x3 x (8/1000) / (20) = 243:1

56 kb/s [50 kb/s actual] Presentation Video

15 fps, 240x176, video quality rating = 75%

Avg. Compression Ratio = 15x240x176x3 x (8/1000) / (50) = 304:1

128 kb/s [100 kb/s actual] Presentation Video

15 fps, 320x240, video quality rating = 75%

Avg. Compression Ratio = 15x320x240x3 x (8/1000) / (100) = 277:1

512 kb/s [LAN connections] Presentation Video

15 fps, 320x240 [full screen], video quality rating = 100%

Avg. Compression Ratio = 15x320x240x3 x (8/1000) / (512) = 54:1
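The template compression ratios above all come from dividing the raw (uncompressed) stream rate by the target actual bit rate; a sketch using the guide's own figures (the function name is illustrative):

```python
def compression_ratio(fps, width, height, abr_kbs, bytes_per_pixel=3):
    """Raw stream rate in kb/s divided by the encoded actual bit rate,
    rounded to the nearest whole ratio."""
    raw_kbs = fps * width * height * bytes_per_pixel * 8 / 1000
    return round(raw_kbs / abr_kbs)

print(compression_ratio(8, 176, 144, 20))    # 243, the 28.8 kb/s template
print(compression_ratio(15, 240, 176, 50))   # 304, the 56 kb/s template
print(compression_ratio(15, 320, 240, 512))  # 54, the LAN template
```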

As an example of matching these ABRs with hardware/operating system I/O capabilities: a Pentium-based system with a non-switched 100 Base-T connection and an expected sustained network bandwidth of 25 - 30 Mb/s should be able to deliver as many as 1,200 28.8 kb/s presentation video streams (27 Mb/s / 22 kb/s = approximately 1,200). The limiting factor in the number of streams this server can support is the network configuration itself. As has been mentioned before, one must set up a test environment in order to understand and account for the overheads in an actual production infrastructure.
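The stream-count estimate above, in code:

```python
sustained_network_kbs = 27_000  # ~27 Mb/s sustained on non-switched 100 Base-T
stream_abr_kbs = 22             # actual bit rate of the 28.8 presentation video

# Integer division gives the maximum number of whole streams the link carries
print(sustained_network_kbs // stream_abr_kbs)  # 1227, roughly 1,200 streams
```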