Chapter 1 - Windows NT Networking Architecture

The Microsoft Windows NT operating system was designed and built with fully integrated networking capabilities. These networking capabilities differentiate Windows NT from other operating systems, such as MS-DOS, OS/2, and UNIX, in which network capabilities are installed separately from the core operating system.

This chapter introduces the Windows NT networking architecture. It provides you with descriptions of the following topics.

  • The design goals and rationale for the Windows NT operating system.

  • The basic components of the Windows NT operating system architecture.

  • The basics of networking architecture in general. This includes a detailed description of the model on which Windows NT was designed, as well as the industry standards and specifications.

  • The Windows NT vertical layers and the interfaces for communication between layers.

  • The Windows NT network protocols, which enable layers on two different computers to communicate with each other.

  • Distributed processing of applications across the network and the mechanisms Windows NT uses to create connections between servers and workstations.

  • The mechanisms for sharing resources across the network, including Multiple Universal Naming Convention Provider (MUP) and Multi-Provider Router (MPR).

  • The workstation and server services.

  • How binding options work, enabling communications between network layers.

  • How Remote Access Service (RAS) works to connect remote or mobile clients to corporate networks.

  • How Services for Macintosh are built into Windows NT, allowing Apple Macintosh clients to connect to a Windows NT Server as if it were any other AppleShare server.

Windows NT Operating System Design and Basics

Two primary forces shaped the design of the Windows NT operating system: market requirements and prudent, vigorous design.

Microsoft customers around the world provided the market requirements. Customers wanted the following features.

  • Portability across families of processors, such as the Intel x86 line

  • Portability across different processor architectures, both complex instruction set computing (CISC), such as the Intel x86 processors, and reduced instruction set computing (RISC), such as the MIPS, DEC Alpha, and PowerPC processors

  • Transparent support for single-processor and multiprocessor computers

  • Support for distributed computing

  • Built-in networking

  • Industry standards compliance, such as POSIX

  • Certifiable security, such as C2, Functional C2, and E3

Leading-edge thinkers in operating system theory and design developed the design goals, complementing the market requirements. The following features have been built into the Windows NT design.

  • Extensibility, or modularity of Windows NT. The modular design allows Microsoft to add new modules to all levels of the operating system without compromising its existing stability.

  • Portability, or the ability of Windows NT to run on both CISC and RISC processors.

  • Scalability, or the ability to take full advantage of symmetric multiprocessing hardware.

  • Reliability and robustness, which means that the architecture protects the operating system and its applications from damage. Applications run in their own processes and cannot read or write outside of their own address space. The operating-system kernel is isolated from applications, which interact with it only through well-defined user-mode application programming interfaces (APIs).

  • Performance, or speed of activity. By running its high-performance subsystems in kernel mode where they interact with the hardware and with each other without thread and process transitions, Windows NT 4.0 improves performance, particularly for graphics-intensive applications, such as Microsoft PowerPoint®, by as much as 20 percent.

  • Compatibility, which means that Windows NT 4.0 continues to support MS-DOS, OS/2, Windows 3.x, and POSIX applications, as well as the FAT file system and a wide variety of devices and networks.

Windows NT continues to blend together real-world experience in operating systems with some of the best ideas from the computing industry and academia on operating system theory.

Open Systems and Industry Standards

Open systems are systems designed to incorporate devices from any manufacturer and to accept third-party add-on hardware or software products. Industry standards fall into two categories: de jure and de facto.

De jure standards have been created by standards bodies, such as the American National Standards Institute (ANSI), the Institute of Electrical and Electronic Engineers (IEEE), and the International Standards Organization (ISO). For example, the ANSI American Standard Code for Information Interchange (ASCII) character encoding standard, the IEEE Portable Operating System Interface for UNIX (POSIX) standard, and the ISO Open System Interconnection (OSI) reference model for computer networks are all de jure standards.

De facto standards have been widely adopted by industry but are not endorsed by any of the standards bodies. An example of a de facto standard is the Transmission Control Protocol/Internet Protocol (TCP/IP) network communications protocol. De facto standards exist either to fill gaps left by the implementation specifications of the de jure standards or because no standard currently exists for the particular area.

Open systems based solely on de jure industry standards do not yet exist and probably never will because of the very different natures of the computer industry and the academic-standards process. The speed with which the technology changes is staggering, and the formal standards process can't keep pace with it. Various composites of system standards exist because the market demands that solutions be implemented immediately. In today's open systems, both de facto and de jure standards are combined to create interoperable systems. It is the strategic combination of both types of standards that enables open systems to keep pace with rapidly changing technology.

One key element of this middle-of-the-road approach is the use of strategically placed layers of software, to allow the adjoining upper and lower layers of software within the operating system to provide different functions that work together. These software layers provide a standardized set of APIs to the software layers above and below themselves. A good example is the Network Device Interface Specification (NDIS), which was jointly developed by Microsoft and 3Com in 1989. Another example is the Desktop Management Interface (DMI) created by the Desktop Management Task Force (DMTF), an industry organization with more than 300 vendor members, including Microsoft, Intel, IBM, and Novell.

The benefit of this architecture is that it allows software modules above and below the layer to be substituted for software modules developed using the same standards. This means you can start out with a module that implements a de facto standard and later supplement or replace it with one that implements a de jure standard. In effect, you end up with the best of both worlds — open systems and industry standards.

Client/Server Computing

The Windows NT operating system is designed for client/server computing. Client/server computing generally means connecting a single-user, general-purpose workstation (client) to multiuser, general-purpose servers, with the processing load shared between both. The client requests services, and the server responds by providing the services.

The Windows NT operating system also extends the client/server model to individual computers. For example, the user runs applications, which are clients that request services from the protected subsystems, which are servers. The idea is to divide the operating system into several discrete processes, each of which implements a set of cohesive services, such as process creation or memory allocation. These processes communicate with their clients, with each other, and with the kernel-mode components of the operating system by passing well-defined messages back and forth.

The client/server approach results in a modular operating system. The servers are small and self-contained. Because each runs in its own protected, user-mode process, a server can fail without taking down the rest of the operating system. The self-contained nature of the operating-system components also makes it possible to distribute them across multiple processors on a single computer (symmetric multiprocessing) or even across multiple computers on a network (distributed computing).

Object-Based Computing

Software objects are a combination of computer instructions and data that models the behavior of things, real or imagined, in the world. Objects are composed of the following elements.

  • Attributes, in the form of program variables, which collectively define the object's state

  • Behavior, in the form of code modules or methods that can modify those attributes

  • An identity that distinguishes one object from all others

Objects interact with each other by passing messages back and forth. The sending object is known as the client and the receiving object is known as the server. The client requests, and the server responds. In the course of conversation, the client and server roles often alternate between objects. Windows NT is not an object-oriented system in the strictest sense, but it does use objects to represent internal system resources.

Windows NT uses an object metaphor that is pervasive throughout the architecture of the system. When viewed using Windows NT, all of the following appear as ordinary objects.

  • Devices, such as printers, tape drives, keyboards, and terminal screens

  • Processes and threads

  • Shared memory segments

  • Access rights
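
The handle-based object pattern is visible from ordinary user-mode code. The following minimal C sketch (the event name is arbitrary) asks the system to create an event object and then manipulates it only through its handle; processes, files, and shared-memory sections are handled the same way.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Ask the system for an event object. The HANDLE is an opaque
           reference to the kernel object, not the object itself. */
        HANDLE hEvent = CreateEventW(NULL, TRUE, FALSE, L"ExampleEvent");
        if (hEvent == NULL) {
            printf("CreateEvent failed: %lu\n", GetLastError());
            return 1;
        }

        SetEvent(hEvent);    /* operate on the object through its handle */
        CloseHandle(hEvent); /* release the reference                    */
        return 0;
    }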

Multitasking and Multiprocessing

Multitasking and multiprocessing are closely related terms that are easily confused. Multitasking is an operating-system technique for sharing a single processor among multiple threads of execution. Multiprocessing refers to computers with more than one processor. A multiprocessing computer can execute multiple threads simultaneously, one thread for each processor in the computer. A multitasking operating system only appears to execute multiple threads at the same time; a multiprocessing operating system actually does so.

Multiprocessing operating systems can be either asymmetric or symmetric. The main difference is in how the processors operate. In asymmetric multiprocessing (ASMP), the operating system typically sets aside one or more processors for its exclusive use. The remainder of the processors run user applications. In symmetric multiprocessing (SMP), any processor can run any type of thread. The processors communicate with each other through shared memory. The Windows NT operating system is an SMP system.

SMP systems provide better load-balancing and fault tolerance. Because the operating system threads can run on any processor, the chance of hitting a CPU bottleneck is greatly reduced. A processor failure in the SMP model will only reduce the computing capacity of the system. In the ASMP model, if the processor that fails is an operating system processor, the whole computer can go down.

SMP systems are inherently more complex than ASMP systems. A tremendous amount of coordination must take place within the operating system to keep everything synchronized. For this reason, SMP systems are usually designed and written from the ground up.

Kernel and User Mode

In modern operating systems, applications are kept separate from the operating system itself. The operating-system code runs in a privileged processor mode known as kernel mode and has access to system data and hardware. Applications run in a nonprivileged processor mode known as user mode and have limited access to system data and hardware through a set of tightly controlled APIs.

One of the design goals of the Windows NT operating system was to keep the base operating system as small and efficient as possible. This was accomplished by allowing only those functions that could not reasonably be performed elsewhere to remain in the base operating system. The functionality that was pushed out of the kernel ended up in a set of nonprivileged servers known as the protected subsystems. The protected subsystems provide the traditional, operating-system support to applications through a feature-rich set of APIs.

This design results in a very stable base operating system. Enhancements occur at the protected subsystem level. New protected subsystems can even be added without modification to either the base operating system or the other existing protected subsystems.

Executive

The executive is the kernel-mode portion of the Windows NT operating system and, except for a user interface, is a complete operating system unto itself. The executive is never modified or recompiled by the system administrator.

Figure 1.1: Windows NT operating system architecture

The executive is actually a family of software components that provide basic operating-system services to the protected subsystems and to each other. The executive components are listed below.

  • I/O Manager

  • Object Manager

  • Security Reference Monitor

  • Process Manager

  • Local Procedure Call Facility

  • Virtual Memory Manager

  • Window Manager

  • Graphics Device Interface

  • Graphics Device Drivers

The executive components are completely independent of one another and communicate through carefully controlled interfaces. This modular design allows existing executive components to be removed and replaced with ones that implement new technologies or features. As long as the integrity of the existing interface is maintained, the operating system runs as before. The top layer of the executive is called the System Services, which are the interfaces between user-mode protected subsystems and kernel mode. For details on the executive and its components, see Chapter 1, "Windows NT Architecture," in the Microsoft Windows NT Workstation 4.0 Online Resource Guide.

Protected Subsystems

The protected subsystems are user-mode servers that are started when Windows NT is started. There are two types of protected subsystems: integral and environment. An integral subsystem is a server that performs an important operating system function, such as security. An environment subsystem is a server that provides support to applications written for or native to different operating system environments, such as OS/2.

Windows NT currently ships with three environment subsystems: the Win32® subsystem, the POSIX subsystem, and the OS/2 subsystem.

The Win32 (or 32-bit Windows) subsystem is the native subsystem of Windows NT. It provides the most capabilities and efficiencies to its applications and is the subsystem of choice for new software development. The POSIX and OS/2 subsystems provide compatibility environments for their respective applications and are not as feature-rich as the Win32 subsystem.

Basic Concepts of Network Architecture

Networking software must perform a wide range of functions to enable communications among computers. Some of these functions are listed below.

  • Device and I/O redirection

  • Process address registration

  • Interprocess connection

  • Password encryption and decryption

  • Message segmentation and desegmentation

  • Frame routing between networks

  • Frame delimiting and media-access arbitration

  • Pulse encoding of bits

To reduce the design complexity of a network, these functions are organized into groups, which are then allocated to a series of layers. The purpose of each layer is to offer services to the other layers, shielding the layers from the details of how the offered services are actually implemented. The services provided by a particular layer are a product of the network functions allocated to that layer and are usually built upon services offered by other layers. The design of the set of layers and of how they function with each other constitutes a network architecture.

Figure 1.2: Layered design of network services

Communication Between Layers

Communication between layers within a computer is handled differently from communication between two computers. The layers within a computer communicate with each other using vertical interfaces. The layers on different computers communicate with their counterparts using protocols.

Peer Relationships—Protocols

Peer-to-peer communications are performed using protocols. For example, layer 4 on one computer carries on a conversation with layer 4 on another computer. The rules and conventions used in this conversation are collectively known as the layer-4 protocol. The communication between the layers is considered peer-to-peer communication. Functions performed in layer 4 of one computer are communicated to layer 4 of another computer.

Vertical Relationships—Interfaces

Each layer ultimately communicates with its peer on the other computer. However, no data passes directly from layer 4 on one computer to layer 4 on another. Instead, each layer passes data and control information to the layer immediately below it, until the lowest layer is reached and the data is transmitted onto the network media. The receiving computer then passes the data and control information from layer to layer until it reaches its own layer 4.

There is a well-defined interface between each pair of layers. The interface defines which services the lower layer offers to the upper one and how those services will be accessed.

Transmitting and Receiving Data Across a Network

When two computers transmit data over the network, one is a transmitting or sending computer and one is a receiving computer. Data is passed in frames, which are messages broken into smaller units with transport headers attached. To understand how frames are transferred through a network, we need to look at both ends of the transfer process: transmitting and receiving.

Transmitting

Data frames are formed whenever the sending computer initiates a request for communication. Frame formation begins at the highest layer and continues down through each successive layer. The protocol at each layer adds control information (in the form of headers and trailers) to the data that was passed down from the layer above. The frame is then passed to the layer below according to the definition of the interface. Eventually, the data passes through all layers of the protocol stack and is transmitted onto the network media.

Receiving

At the receiving end, the frame is passed from the lower layers to the higher layers in accordance with the definition of the interfaces. The protocol at each layer interprets only the information contained in the headers and trailers that were placed there by its peer on the transmitting end. The protocol considers the rest of the frame to be the data unit, which it is responsible for delivering to the layer above it.

The Open Systems Interconnect Model

In the early years of networking, sending and receiving data across a network was confusing because large companies, such as IBM, Honeywell, and Digital Equipment Corporation had individual standards for connecting computers. The transmit and receive processes had to "talk" to the same protocols to communicate. It was unlikely that applications operating on different equipment from different vendors would be able to communicate. Vendors, users, and standards bodies needed to agree upon and implement a standard architecture that would allow computer systems to exchange information even though they were using software and equipment from different vendors.

In 1978, the International Standards Organization (ISO) introduced a model for Open Systems Interconnect (OSI) as a first step toward international standardization of the various protocols required for network communication. This ISO OSI model incorporates the following qualities.

  • It is designed to establish data-communications standards that promote multivendor interoperability.

  • It consists of seven layers, with a specific set of network functions allocated to each layer and guidelines for implementation of the interfaces between layers.

  • It specifies the set of protocols and interfaces to implement at each layer.

OSI Layers

Each layer of the OSI model exists as an independent module. In theory, you can substitute one protocol for another at any given layer without affecting the operation of layers above or below, although you probably wouldn't want to do so.

The principles used to create the seven-layer model are listed below.

  • A layer should be created only when a different level of abstraction is required.

  • Each layer should perform a well-defined function.

  • The function of each layer should be chosen with the goal of defining internationally standardized protocols.

  • The layer boundaries should be chosen to minimize the information flow across the interfaces.

  • The number of layers should be large enough to enable distinct functions to be separated, but few enough to keep the architecture from becoming unwieldy.

The following diagram shows the numbering of the layers, beginning with the physical layer, which is closest to the network media.

Figure 1.3: Layers of the OSI model

Physical Layer

The physical layer is the lowest layer of the OSI model. This layer controls the way unstructured, raw, bit-stream data is sent and received over a physical medium. This layer describes the electrical or optical, mechanical, and functional interfaces to the physical network medium. The physical layer carries the signals for all of the higher layers.

Data encoding modifies the simple digital-signal pattern (1s and 0s) used by the computer to better accommodate the characteristics of the physical medium and to assist in bit and frame synchronization.

Data encoding resolves the following issues.

  • Which signal pattern represents a binary 1

  • How the receiving station recognizes when a "bit-time" starts

  • How the receiving station delimits a frame

Physical medium attachment resolves the following issues.

  • Whether an external transceiver will be used to connect to the medium

  • How many pins the connectors have and what each pin is used for

The transmission technique determines whether the encoded bits will be transmitted by means of baseband (digital) signaling or broadband (analog) signaling.

Physical-medium transmission determines whether it is appropriate to transmit bits as electrical or optical signals, based on the following criteria.

  • Which physical-medium options can be used

  • How many volts should be used to represent a given signal state in the specific physical medium

Data-link Layer

The data-link layer provides error-free transfer of data frames from one computer to another over the physical layer. The layers above this layer can assume virtually error-free transmission over the network.

The data-link layer provides the following functions.

  • Establishing and terminating a logical link (virtual-circuit connection) between two computers identified by their unique network interface card (NIC) addresses

  • Controlling frame flow by instructing the transmitting computer not to transmit frames when no receive buffers are available

  • Sequentially transmitting and receiving frames

  • Providing and expecting frame-acknowledgment, and detecting and recovering from errors that occur in the physical layer by retransmitting non-acknowledged frames and handling duplicate frame receipts

  • Managing media access to determine when the computer is permitted to use the physical medium

  • Delimiting frames to create and recognize frame boundaries

  • Error-checking frames to confirm the integrity of the received frame

  • Inspecting the destination address of each received frame and determining if the frame should be directed to the layer above

Network Layer

The network layer controls the operation of the subnet. It determines which physical path the data takes, based on the network conditions, the priority of service, and other factors.

The network layer provides the following functions.

  • Transferring the frame to a router if the network address of the destination does not indicate the network to which the station is attached

  • Controlling subnet traffic to allow an intermediate system to instruct a sending station not to transmit its frame when the router's buffer fills up. If the router is busy, the network layer can instruct the sending station to use an alternate router

  • Allowing the router to fragment a frame when a downstream router's maximum transmission unit (MTU) size is less than the frame size. The frame fragments will be reassembled by the destination station

  • Resolving the logical computer address (at the network layer) with the physical network-interface-card (NIC) address (at the data-link layer), if necessary

  • Keeping an accounting record of frames forwarded by subnet intermediate systems to produce billing information

The network layer at the transmitting computer must build its header in such a way that the network layers residing in the subnet's intermediate systems can recognize the header and use it to route the data to the destination address.

This layer eliminates the need for higher layers to know anything about the data transmission or intermediate switching technologies used to connect systems. The network layer is responsible for establishing, maintaining, and terminating the connection to one or to several intermediate systems in the communication subnet.

In the network layer and the layers below it, the peer protocols are between each computer and its immediate neighbor, which is often not the ultimate destination computer. The source and destination computers may be separated by many intermediate systems.

Transport Layer

The transport layer makes sure that messages are delivered in the order in which they were sent and that there is no loss or duplication. It relieves the higher-layer protocols of any concern about the transfer of data between them and their peers.

The size and complexity of a transport protocol depend on the type of service it can get from the network layer or data link layer. For a reliable network layer or data-link layer with virtual-circuit capability, such as NetBEUI's LLC layer, a minimal transport layer is required. If the network layer or data-link layer is unreliable or supports only datagrams (as TCP/IP's IP layer and NWLink's IPX layer do), the transport layer should include frame sequencing and acknowledgment, and associated error-detection and recovery.

Functions of the transport layer include the following tasks.

  • Accepting messages from the layer above and, if necessary, splitting them into frames

  • Providing reliable, end-to-end message delivery with acknowledgments

  • Instructing the transmitting computer not to transmit when no receive buffers are available

  • Multiplexing several process-to-process message streams or sessions onto one logical link and keeping track of which messages belong to which sessions

The transport layer can accept large messages, but there are strict size limits imposed by the layers at the network level and lower. Consequently, the transport layer must break up the messages into smaller units, called frames, and attach a header to each frame.

If the lower layers do not maintain sequence, the transport header (TH) must contain sequence information, which enables the transport layer on the receiving end to present data in the correct sequence to the next higher layer.
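
As a rough sketch of this segmentation, the following C fragment breaks a message into frames and tags each one with a sequence number so the receiving transport layer could reassemble them in order. The frame size and header fields are invented for the example and do not correspond to any real protocol.

    #include <stdio.h>
    #include <string.h>

    #define MAX_FRAME_DATA 8          /* tiny frame size, for illustration */

    /* Hypothetical transport header carrying sequence information. */
    struct transport_header {
        unsigned short seq;           /* frame sequence number    */
        unsigned short total;         /* total frames in message  */
    };

    int main(void)
    {
        const char *message = "a message larger than one frame";
        size_t len = strlen(message);
        unsigned short total =
            (unsigned short)((len + MAX_FRAME_DATA - 1) / MAX_FRAME_DATA);

        for (unsigned short seq = 0; seq < total; seq++) {
            struct transport_header th = { seq, total };
            const char *chunk = message + (size_t)seq * MAX_FRAME_DATA;
            size_t n = len - (size_t)seq * MAX_FRAME_DATA;
            if (n > MAX_FRAME_DATA)
                n = MAX_FRAME_DATA;

            /* A real stack would hand the header and data to the network
               layer; here they are simply printed. */
            printf("frame %u of %u: %.*s\n", th.seq + 1, th.total, (int)n, chunk);
        }
        return 0;
    }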

Unlike the lower subnet layers, whose protocols are between immediately adjacent nodes or computers, the transport layer and the layers above it are true source-to-destination layers. They are not concerned with the details of the underlying communications facility. Software for layers on the source computer at the transport level and above carries on a conversation with similar software on the destination computer, using message headers and control messages.

Session Layer

The session layer establishes a communications session between processes running on different computers, and can support message-mode data transfer.

Functions of the session layer include:

  • Allowing application processes to register unique process addresses, such as NetBIOS names. It provides the means by which these process addresses can be resolved to the network-layer or data-link-layer NIC addresses, if necessary.

  • Establishing, monitoring, and terminating a virtual-circuit session between two processes identified by their unique process addresses. A virtual-circuit session is a direct link that seems to exist between the sender and receiver; in reality, the connection is established through the lower layers and any intermediate systems.

  • Delimiting messages, to add header information that indicates where a message starts and ends. The receiving session layer can then refrain from indicating any message data to the overlying application until the entire message has been received.

  • Informing the receiving application when buffer space is insufficient for the entire message and that the message is incomplete (called message synchronization). The receiving session layer may also use a control frame to inform the sending session layer how many bytes of the message have been successfully received. The sending session layer can then resume sending data at the byte following the last byte acknowledged as received. When the application subsequently provides another buffer, the session layer can place the remainder of the message in that buffer and indicate to the application that the entire message has been received.

  • Performing other support functions that allow processes to communicate over the network, such as user authentication and resource-access security.

Presentation Layer

The presentation layer serves as the data translator for the network. This layer on the sending computer translates data from the format sent by the application layer into a common format. At the receiving computer, the presentation layer translates the common format to a format known to the application layer.

The presentation layer provides the following functions.

  • Character-code translation, such as from ASCII to EBCDIC

  • Data conversion, such as bit order, CR-to-CR/LF, and integer-to-floating point

  • Data compression, which reduces the number of bits that need to be transmitted

  • Data encryption, which renders data unreadable until it has been decrypted, for security purposes. An example is password encryption

Application Layer

The application layer serves as the window for users and application processes to access network services. The application layer provides the following functions.

  • Resource sharing and device redirection

  • Remote file access

  • Remote printer access

  • Interprocess communication support

  • Remote procedure call support

  • Network management

  • Directory services

  • Electronic messaging, including e-mail messaging

  • Simulation of virtual terminals

Data Flow in the OSI Model

The OSI model presents a standard data flow architecture, with protocols specified in such a way that layer n at the destination computer receives exactly the same object as was sent by layer n at the source computer.

Figure 1.4: OSI model data flow

The sending process passes data to the application layer, which attaches an application header (AH) and then passes the frame to the presentation layer.

The presentation layer can transform data in various ways, if necessary, such as by translating it and adding a header. It then gives the result to the session layer. The presentation layer is not "aware" of which portion (if any) of the data received from the application layer is AH and which portion is actually user data, because that information is irrelevant to the presentation layer's role.

The process is repeated from layer to layer until the frame reaches the data-link layer. There, in addition to a header, a data trailer (DT) is added to aid in frame synchronization. The frame is then passed down to the physical layer, where it is actually transmitted to the receiving computer.

On the receiving computer, the various headers and the DT are stripped off one by one as the frame ascends the layers and finally reaches the receiving process.

Although the actual data transmission is vertical, each layer is programmed as if the transmission were horizontal. For example, when a sending transport layer gets a message from the session layer, it attaches a transport header (TH) and sends it to the receiving transport layer. The fact that the message actually passes to the network layer on its own machine is unimportant.
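
The header-wrapping behavior can be sketched in a few lines of C. The header tags below simply mirror the figure (AH, TH, DT, and so on); no real protocol formats are implied.

    #include <stdio.h>
    #include <string.h>

    /* Each layer prepends its own tag without inspecting what the layer
       above handed down; the data-link layer also appends a trailer. */
    static void wrap(const char *hdr, const char *payload,
                     char *out, size_t outsize)
    {
        snprintf(out, outsize, "%s%s", hdr, payload);
    }

    int main(void)
    {
        char l7[256], l6[256], l4[256], l3[256], l2[300];

        wrap("AH|", "user data", l7, sizeof(l7));   /* application header  */
        wrap("PH|", l7, l6, sizeof(l6));            /* presentation header */
        wrap("TH|", l6, l4, sizeof(l4));            /* transport header    */
        wrap("NH|", l4, l3, sizeof(l3));            /* network header      */
        snprintf(l2, sizeof(l2), "DH|%s|DT", l3);   /* data-link header and trailer */

        printf("on the wire: %s\n", l2);  /* DH|NH|TH|PH|AH|user data|DT */
        return 0;
    }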

Vertical Interface Terminology in the OSI Model

In addition to defining the ideal seven-layer architecture and the network functions allocated to each layer, the OSI model also defines a standard set of rules and associated terms that govern the interfaces between layers.

The active protocol elements in each layer are called entities, which are typically implemented by means of a software process. For example, the TCP/IP protocol suite contains two entities within its transport layer: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). Entities in the same layer on different computers are called peer entities.

The layer directly below layer-n entities implements services that will be used by layer n.

For data transfer services, OSI defines the terminology for the discrete data components passed across the interface and between peer entities, as described in the following example.

  • The layer-n entity passes an interface data unit (IDU) to the layer-n–1 entity.

  • The IDU consists of a protocol data unit (PDU) and some interface control information (ICI).

  • The PDU is the data that the layer-n entity wishes to pass across the network to its peer entity. It consists of the layer-n header and the data that layer n received from layer n+1.

  • The layer-n PDU becomes the layer n–1 service data unit (SDU), because it is the data unit that will be serviced by layer n–1.

  • The ICI is made up of control information, such as the length of the SDU, and the addressing information that the layer below needs to do its job.

  • When layer n–1 receives the layer-n IDU, it strips off and "considers" the ICI, adds the header information for its peer entity across the network, adds ICI for the layer below, and passes the resulting IDU to the layer n–2 entity.

Figure 1.5: Vertical interface entities
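
One way to picture these terms is as nested data structures passed across the interface, as in the following C sketch. The field sizes are arbitrary; only the containment relationships matter.

    #include <stdio.h>
    #include <string.h>

    typedef struct {
        unsigned sdu_length;          /* ICI: length of the SDU              */
        unsigned char dest_addr[6];   /* ICI: addressing for the layer below */
    } Ici;

    typedef struct {
        unsigned char header[8];      /* layer-n header for its peer entity  */
        unsigned char data[64];       /* data handed down from layer n+1     */
    } Pdu;                            /* the layer-n PDU is the layer n-1 SDU */

    typedef struct {
        Ici ici;                      /* interface control information    */
        Pdu pdu;                      /* protocol data unit               */
    } Idu;                            /* what layer n passes to layer n-1 */

    int main(void)
    {
        Idu idu;
        memset(&idu, 0, sizeof(idu));
        memcpy(idu.pdu.data, "payload from layer n+1", 22);
        idu.ici.sdu_length = sizeof(idu.pdu);
        printf("IDU = ICI (%u-byte SDU) + PDU\n", idu.ici.sdu_length);
        return 0;
    }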

Problems can occur in the data-flow path between two network stations. These problems can include errant, restricted, or even halted communication. Vertical and peer-trace utilities can be developed by third-party vendors to trace network communication errors.

Vertical Interface Trace Utilities

Layer entities within a computer can call layer entities above and below them by means of established interface-call mechanisms (such as an interrupt in MS-DOS or, in Windows NT, an API-function call or IRP submission) and then pass a defined IDU structure. These call mechanisms provide the means to write a trace utility, which can do the following.

  • Capture the interface-call-mechanism entry point, saving the original entry point.

  • Gain control when the entry point is called.

  • Examine the structure being passed, "snapshot" all or part of the IDU structure, and then write the snapshot to a buffer or log file.

  • Pass control to the original entry point.

If the data-flow problem is due to a layer entity's passing incorrect or incorrectly formatted ICI information, an examination of the log generated by the interface trace utility should reveal the cause of the problem. Vertical-interface trace utilities that can be used to troubleshoot networking include the NBTRAP (NetBIOS interface trace) utility for MS-DOS and the TDITRACE (Transport Driver Interface trace) utility for Windows NT, among others.
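
The general hooking pattern such a utility relies on can be sketched in user-mode C. Every name here is hypothetical; a real trace tool would capture the actual NetBIOS or TDI entry point rather than a local function pointer.

    #include <stdio.h>

    /* Hypothetical interface entry point: a layer submits an IDU (here just
       a buffer and a length) to the layer below. */
    typedef int (*SubmitIduFn)(const void *idu, unsigned len);

    static int real_submit(const void *idu, unsigned len)
    {
        (void)idu; (void)len;         /* the real lower layer would act here */
        return 0;
    }

    /* Saved original entry point, captured when the trace is installed. */
    static SubmitIduFn original_submit = real_submit;

    /* Trace shim: snapshot part of the IDU, then chain to the original. */
    static int traced_submit(const void *idu, unsigned len)
    {
        const unsigned char *p = idu;
        fprintf(stderr, "IDU len=%u first byte=0x%02x\n", len, len ? p[0] : 0);
        return original_submit(idu, len);
    }

    int main(void)
    {
        SubmitIduFn entry = traced_submit;        /* hook installed            */
        unsigned char frame[2] = { 0x42, 0x00 };  /* toy IDU                   */
        entry(frame, sizeof(frame));              /* caller is unaware of hook */
        return 0;
    }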

Peer-protocol Trace Utilities

A specially configured computer can connect to the physical medium to receive and examine all frames sent to and from specified network addresses. The user can set the computer software to display frame-header information at any functional layer. The user can then view peer-protocol conversations between selected computers. If the data-flow problem is due to an error in the peer protocol, the user can detect it by examining the trace. Peer-protocol trace utilities include Sniffer from Network General and Microsoft Network Monitor, among others.

Figure 1.6: Troubleshooting, using a Data Flow Trace

IEEE Standard 802 for Low-level Protocols

Recognizing a need for standards in the local area network (LAN) market, the IEEE undertook Project 802. Named for the year (1980) and month (February) of its inception, Project 802 defines a family of low-level protocol standards at the physical and data-link layers of the OSI model.

Under the terms of IEEE 802, the OSI data-link layer is further divided into two sublayers: Logical Link Control (LLC) and Media Access Control (MAC).

Figure 1.7: Comparison of IEEE layers and the OSI model

Data-link-layer functions allocated to the LLC sublayer include the following.

  • Establishing and terminating links

  • Controlling frame traffic

  • Sequencing frames

  • Acknowledging frames

Data-link-layer functions allocated to the MAC sublayer include the following items.

  • Managing media access

  • Delimiting frames

  • Checking frame errors

  • Recognizing frame addresses

Project 802 specifications include the following categories.

  • 802.1 Overview of project 802, including higher layers and internetworking

  • 802.2 Logical Link Control (LLC)

  • 802.3 Carrier Sense Multiple Access with Collision Detect (CSMA/CD)

  • 802.4 Token Bus

  • 802.5 Token Ring

  • 802.6 Metropolitan Area Network

  • 802.7 Broadband Technology Advisory Group

  • 802.8 Optical Fiber Technology Advisory Group

  • 802.9 Voice/Data Integration on LANs

  • 802.10 Standard for Interoperable LAN Security

The low-level protocol specifications 802.3 CSMA/CD, 802.4 Token Bus, and 802.5 Token Ring differ at the physical layer and MAC sublayer, but are compatible at the LLC sublayer.

The 802 standards have been adopted by the following standards bodies.

  • ANSI, as American National Standards

  • National Bureau of Standards (NBS), as government standards

  • ISO, as international standards (known as ISO 8802)

ANSI FDDI Specification

Closely related to the IEEE 802 standards is a more recently developed low-level protocol standard, Fiber Distributed Data Interface (FDDI). FDDI was developed by ANSI and is based on the use of fiber-optic cable.

FDDI differs from the IEEE 802 standards at the physical layer and MAC sublayer, but is compatible with the IEEE standards at the LLC sublayer.

Data-transfer Services

Protocol entities within a network architecture provide various types of data-transfer services from a layer to the layers above it. The most prevalent data-transfer services are called reliable connection and unreliable connectionless.

A connection service requires a virtual circuit or connection to be established from the source computer to a single destination computer before any data transfer can take place. A connection acts like a tube: The sender pushes objects in at one end, and the receiver takes them out at the other end, in proper sequence. Because sequencing is provided, a message that is larger than the maximum transmit-frame size can be broken down into multiple frames, sent across the network, and re-sequenced at the receiving computer.

A connectionless service requires no initial connection and offers no guarantee that the data units will arrive in sequence. No connection is required, so messages can be sent to one or multiple destination stations. No sequencing is provided, so messages can be sent only in single-frame size.

A reliable service never loses data because the receiver acknowledges receipt of all data sent. If the sender doesn't receive acknowledgment, it re-sends.

With an unreliable service, no acknowledgments are sent, so there is no guarantee that data sent was ever received.

Microsoft network products require that the underlying network drivers provide both reliable-connection and unreliable-connectionless data-transfer services.

Microsoft network products use unreliable-connectionless data transfer only when there is a need to send data simultaneously to many stations. Often, unreliable-connectionless data transfer is used to locate the name of a computer. Once the computer name is received, reliable-connection data transfer is used to connect to the computer and complete the desired transaction. For example, an unreliable-connectionless data transfer may be sent to all computers in a domain (such as net send /d:<domain name> "hello") to find the name of a server that provides a particular service.
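
The two service types map directly onto the Windows Sockets API, as the following sketch shows. The port numbers and server address are placeholders: a datagram socket broadcasts a discovery message with no delivery guarantee, and a stream socket then establishes a reliable connection to a specific server.

    #include <winsock2.h>
    #include <stdio.h>
    #pragma comment(lib, "ws2_32.lib")

    int main(void)
    {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
            return 1;

        /* Unreliable connectionless: one datagram broadcast to every station
           on the subnet; nothing guarantees that any station receives it. */
        SOCKET dg = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        int on = 1;
        setsockopt(dg, SOL_SOCKET, SO_BROADCAST, (const char *)&on, sizeof(on));

        struct sockaddr_in bcast = {0};
        bcast.sin_family = AF_INET;
        bcast.sin_port = htons(9);                     /* placeholder port */
        bcast.sin_addr.s_addr = htonl(INADDR_BROADCAST);
        sendto(dg, "who offers service X?", 21, 0,
               (struct sockaddr *)&bcast, sizeof(bcast));
        closesocket(dg);

        /* Reliable connection: a virtual circuit is established first, then
           data flows in sequence and is acknowledged. */
        SOCKET vc = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(139);                        /* NetBIOS session service */
        srv.sin_addr.s_addr = inet_addr("192.168.1.10");  /* placeholder address     */
        if (connect(vc, (struct sockaddr *)&srv, sizeof(srv)) == 0)
            printf("connected\n");
        closesocket(vc);

        WSACleanup();
        return 0;
    }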

Data-transfer Modes

Different protocol entities offer different modes by which data can be transferred across the network from one process to another. As indicated in Table 1.1, some protocols, such as the NetBEUI NBFP, support more than one data-transfer mode.

Table 1.1 Data-transfer Mode Definitions and Protocols

Data-transfer mode          Mode type           Definition                                            Protocol
Reliable connection         Message mode        Message delimitation and synchronization              NetBEUI NBFP, TCP/IP NetBT, NWLink NBIPX
                            Byte-stream mode    Byte-granular data sequencing and acknowledgment      TCP/IP TCP
                            Packetstream mode   Packet-granular data sequencing and acknowledgment    NetBEUI, LLC, MSDLC, NWLink SPX
Unreliable connectionless   Datagram mode       Unsequenced, unacknowledged single-frame delivery     NetBEUI NBFP and LLC; MSDLC; TCP/IP UDP and IP; NWLink IPX

The following sections discuss the attributes of various data-transfer modes.

Reliable Connection Message Mode

When the sending client submits a larger-than-packet-size message to be sent, the sending protocol entity breaks the message into frame-sized segments. These include message-delimiter information in the protocol header, which identifies where the client-submitted message starts and ends. This process allows the receiving-protocol driver to receive the entire client message before indicating the data to the receiving client.

If the receiving-client buffer fills up before the receiving-station protocol entity has received the entire message, it will still provide the partial message to the receiving client. It will also indicate that the data provided is a partial message and that the receiving client must supply another buffer to receive the remaining portion of the message.

When the receiving-protocol entity has received the entire message, it returns a message to the sending-protocol entity, acknowledging receipt of the entire message.
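
On Windows NT the same message-mode behavior, including the partial-message indication, can be observed by a Win32 client reading from a message-mode named pipe. The pipe name below is hypothetical and a listening message-mode pipe server is assumed; when the supplied buffer is smaller than the incoming message, ReadFile reports ERROR_MORE_DATA and the client reads again for the remainder.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical pipe name; a message-mode pipe server is assumed. */
        HANDLE h = CreateFileW(L"\\\\.\\pipe\\example", GENERIC_READ,
                               0, NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        /* Read the pipe as messages rather than as a byte stream. */
        DWORD mode = PIPE_READMODE_MESSAGE;
        SetNamedPipeHandleState(h, &mode, NULL, NULL);

        char buf[64];                  /* deliberately small buffer */
        DWORD got;
        for (;;) {
            BOOL ok = ReadFile(h, buf, sizeof(buf), &got, NULL);
            printf("received %lu bytes\n", got);
            if (ok)
                break;                               /* whole message delivered */
            if (GetLastError() != ERROR_MORE_DATA)
                break;                               /* a real error occurred   */
            /* ERROR_MORE_DATA: a partial message was delivered; loop to
               read the rest of the same message. */
        }
        CloseHandle(h);
        return 0;
    }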

Reliable Connection Byte-stream Mode

When the sending client submits a larger-than-packet-size message, the sending-protocol-driver entity breaks the message into segments but does not include message-delimiter information in the protocol header. The receiving-station-protocol entity provides the data to the receiving client when the receiving client provides the buffers to receive it, without any regard to the original message size.

When the receiving-protocol entity provides the data to the receiving client, it returns a byte acknowledgment to the sending-protocol entity, acknowledging receipt of all data up to a specified byte. The sending client can then resume sending the message at the byte following the last byte acknowledged.

Reliable Connection Packetstream Mode

In this mode, the sending client can submit only packet-size messages. The receiving protocol provides the packet-by-packet data in sequence to the receiving client.

When the receiving protocol entity provides a frame of data to the receiving client, it returns a message to the sending protocol entity, acknowledging receipt of all packets up to the specified packet number.

Note: The Windows Sockets emulator emulates message-mode data transfer over the NWLink SPX packet stream. The Sockets emulator component will break large messages into packets and place message delimiters in the packet stream. The receiving Sockets emulator will not provide the received data to the receiving Sockets client until the entire message has been received.

Unreliable Connectionless Datagram Mode

The sending client can submit only packet-size messages to be sent. If the data unit received is larger than the receiving client's next receive buffer, then as much of the received data unit as will fit is placed in the receiving client's buffer and provided to the client. The portion of the received data unit that could not fit in the client buffer is simply lost; no associated error is returned to the receiving client. If the data unit received is smaller than the client buffer, the protocol entity will place the received data unit in the client buffer and immediately provide the message, without waiting for more data to be received.

No acknowledgment is returned to the sending-protocol entity.

The following diagram illustrates the data transfer process: 'S' (on the right side of the diagram) indicates a SUCCESS return status code on the receive. 'E' indicates an error status on the receive: ERROR_MORE_DATA on message mode data transfer.

Figure 1.8: Data-transfer modes

The Windows NT Layered Network Architecture

A significant difference between the Windows NT operating system and other operating systems is that the networking capabilities were built into the system from the ground up. With MS-DOS, Windows 3.x, and OS/2, networking was added to the operating system.

By providing both client and server capabilities, a computer running Windows NT Server or Windows NT Workstation can be either a client or server in either a distributed-application environment or a peer-to-peer networking environment.

The following illustration shows the specific way Windows NT implements the OSI model.

Figure 1.9: Windows NT layered network architecture

To understand the Windows NT operating system capabilities, it is important to understand the network architecture. The layered organization of the networking architecture provides expandability by allowing other functions and services to be added.

The rest of the chapter is devoted to an explanation of the concepts and features of the Windows NT layered networking architecture, including the following topics.

  • Boundary layers

  • Network protocols

  • Streams

  • Distributed processing

  • Distributed component object model

  • Network resource access

  • Workstation service

  • Server service

  • Binding options

  • Remote Access Service

  • Services for Macintosh

Boundary Layers

A boundary is the unified interface between the functional layers in the Windows NT network architecture model. Creating boundaries as breakpoints in the network layers helps open the system to outside development, making it easier for outside vendors to develop network drivers and services. Because the functionality that must be implemented between the layers is well defined, developers need to program only between the boundary layers instead of going from the top to the bottom. Boundary layers also enable software developed above and below a level to be integrated without rewriting.

There are two significant boundary layers in the Windows NT operating system network architecture: the Network Driver Interface Specification (NDIS) 3.0 boundary layer and the Transport Driver Interface (TDI) boundary layer.

The NDIS 3.0 boundary layer provides the interface to the NDIS wrapper and device drivers.

Figure 1.10: Windows NT boundary layers

Transport Driver Interface

TDI is a common interface for a driver (such as the Windows NT redirector and server) to communicate with the various network transports. This allows redirectors and servers to remain independent of transports. Unlike NDIS, there is no driver for TDI; it is simply a standard for passing messages between two layers in the network architecture.

Network Driver Interface Specification 3.0

In 1989, Microsoft and 3Com jointly developed a specification defining an interface for communication between the MAC sublayer and protocol drivers higher in the OSI model. Network Driver Interface Specification (NDIS) is a standard that allows multiple network adapters and multiple protocols to coexist. NDIS permits the high-level protocol components to be independent of the network interface card by providing a standard interface. The network interface card driver is at the bottom of the network architecture. Because the Windows NT network architecture supports NDIS 3.0, it requires that network adapter-card drivers be written to the NDIS 3.0 specification. NDIS 3.0 allows an unlimited number of network adapter cards in a computer and an unlimited number of protocols that can be bound to a single adapter card.

In Windows NT, NDIS has been implemented in a module called Ndis.sys, which is referred to as the NDIS wrapper.

The NDIS wrapper is a small piece of code surrounding all of the NDIS device drivers. The wrapper provides a uniform interface between protocol drivers and NDIS device drivers, and contains supporting routines that make it easier to develop an NDIS driver.

Figure 1.11: NDIS Wrapper

Previous implementations of NDIS required a protocol manager (PROTMAN) to control access to the network adapter. The primary function of PROTMAN was to control the settings on the network adapter and the bindings to specific protocol stacks. The Windows NT operating system networking architecture does not need a PROTMAN module because adapter settings and bindings are stored in the registry and configured using Control Panel.

Because the NDIS wrapper controls the way protocols communicate with the network adapter card, the protocols communicate with the NDIS wrapper rather than with the network adapter card itself. This is an example of the modularity of the layered model. The network adapter card is independent from the protocols; therefore, a change in protocols does not require changing settings for the network adapter card.

Network Protocols

The Windows NT operating system ships with four network protocols:

  • Data Link Control (DLC)

  • NetBEUI

  • TCP/IP

  • NWLink (IPX/SPX)

Figure 1.12: Windows NT network protocols

Data Link Control

Unlike the other protocols, the Data Link Control (DLC) protocol is not designed to be a primary protocol for network use between personal computers. DLC provides applications with direct access to the data-link layer but is not used by the Windows NT operating system redirector. Since the redirector cannot use DLC, this protocol is not used for normal-session communication between computers running Windows NT Server or Windows NT Workstation.

The DLC protocol is primarily used for two tasks.

  • Accessing IBM mainframes, which usually run 3270 applications.

  • Printing to Hewlett-Packard printers connected directly to the network.

Network-attached printers, such as the HP III, use the DLC protocol because the received frames are easy to take apart and because DLC functionality can easily be coded into read-only memory (ROM).

DLC needs to be installed only on those network machines that perform these two tasks, such as a print server sending data to a network HP printer. Client computers sending print jobs to the network printer do not need the DLC protocol; only the print server communicating directly with the printer needs the DLC protocol installed.

The registry location for the DLC parameters is:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\DLC

The registry entry for the DLC driver indicates that it is loaded after the kernel has been started (Start 0x1), and it is dependent on having an NDIS group service available. Its linkage shows that it is bound to the network adapter by the appropriate NDIS device driver.

NetBIOS Extended User Interface

IBM introduced NetBIOS Extended User Interface (NetBEUI) in 1985. NetBEUI was developed for small departmental LANs of 20 to 200 computers. It was assumed that these LANs would be connected by gateways to other LAN segments and mainframes.

NetBEUI version 3.0 is included with Windows NT Server and Windows NT Workstation. It features the following advantages.

  • Provides a fast protocol on small LANs

  • Breaks the 254-session barrier of previous versions of NetBEUI

  • Provides much better performance over slow links than previous versions of NetBEUI

  • Is completely self-tuning and has good error protection

  • Uses a small amount of memory

  • Does not require configuring

NetBEUI has two disadvantages.

  • NetBEUI is not routable.

  • NetBEUI performance across WANs is poor.

The registry location for the NetBEUI parameters is:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NBF

The NetBEUI registry entry looks like the DLC entry. Like the DLC driver, NetBEUI is dependent on having an available NDIS group service. Also, under the linkage key, NetBEUI is bound to the network adapter entry by way of the NDIS device driver entry.

Strictly speaking, NetBEUI 3.0 is not truly NetBEUI because it is not inherently extending the NetBIOS interface. Instead, its upper-level interface conforms to the TDI. However, NetBEUI 3.0 still uses the NetBIOS Frame Format (NBF) protocol and is completely compatible and interoperable with previous versions of NetBEUI.

Network applications speaking directly to the NetBEUI 3.0 protocol driver now must use TDI commands instead of NetBIOS commands. This is a departure from earlier implementations of NetBEUI on MS-DOS and OS/2, which provided the programming interface as part of the transport's device driver. There is nothing wrong with this, but in the Windows NT operating system implementation, the programming interface (NetBIOS) has been separated from the protocol (NetBEUI) to increase flexibility in the layered architecture. Two points summarize the difference between these two.

  • NetBEUI is a protocol.

  • NetBIOS is a programming interface.
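
To see the distinction in practice, a program can call the NetBIOS programming interface (the Netbios function and an NCB block) without knowing whether NetBEUI, NetBT, or NWLink NBIPX carries the traffic underneath. The LANA number and the name registered below are assumptions made for illustration; real code would enumerate the available LANAs first.

    #include <windows.h>
    #include <nb30.h>
    #include <stdio.h>
    #pragma comment(lib, "netapi32.lib")

    int main(void)
    {
        NCB ncb;
        UCHAR rc;

        /* Reset LANA 0 before using it. */
        memset(&ncb, 0, sizeof(ncb));
        ncb.ncb_command = NCBRESET;
        ncb.ncb_lana_num = 0;              /* assumed LANA number */
        Netbios(&ncb);

        /* Register a unique NetBIOS name (a process address) on LANA 0. */
        memset(&ncb, 0, sizeof(ncb));
        ncb.ncb_command = NCBADDNAME;
        ncb.ncb_lana_num = 0;
        memset(ncb.ncb_name, ' ', NCBNAMSZ);
        memcpy(ncb.ncb_name, "EXAMPLEPROCESS", 14);   /* example name */
        rc = Netbios(&ncb);

        printf("NCBADDNAME returned 0x%02x\n", rc);
        return 0;
    }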

Transmission Control Protocol/Internet Protocol

TCP/IP is an industry-standard suite of protocols designed for WANs. Its development began in 1969 as part of a Defense Advanced Research Projects Agency (DARPA) research project on network interconnection.

DARPA developed TCP/IP to connect its research networks. This combination of networks continued to grow and now includes many government agencies, universities, and corporations. This global WAN is called the Internet.

The Windows NT implementation of TCP/IP allows users to connect to the Internet and to any machine that runs TCP/IP and provides TCP/IP services.

Some of the advantages of the TCP/IP protocol include the following.

  • Providing connectivity across operating systems and hardware platforms

  • Providing access to the Internet

  • Providing a routable protocol

  • Supporting Simple Network Management Protocol (SNMP)

  • Supporting Dynamic Host Configuration Protocol (DHCP), which provides dynamic IP-address assignments

  • Supporting Windows Internet Name Service (WINS), which provides a dynamic database of IP address-to-NetBIOS name-resolution mappings

The registry location for TCP/IP parameters is:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip
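
Parameters under this key can be inspected programmatically with the Win32 registry API, as in the following sketch. The Domain value is used only as an example; which values are present depends on how the system is configured.

    #include <windows.h>
    #include <stdio.h>
    #pragma comment(lib, "advapi32.lib")

    int main(void)
    {
        HKEY key;
        WCHAR domain[256];
        DWORD size = sizeof(domain), type;

        /* Open the TCP/IP parameters key read-only. */
        if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                          L"System\\CurrentControlSet\\Services\\Tcpip\\Parameters",
                          0, KEY_READ, &key) != ERROR_SUCCESS)
            return 1;

        /* Query one example value. */
        if (RegQueryValueExW(key, L"Domain", NULL, &type,
                             (LPBYTE)domain, &size) == ERROR_SUCCESS
            && type == REG_SZ)
            wprintf(L"TCP/IP domain: %s\n", domain);

        RegCloseKey(key);
        return 0;
    }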

NWLink (IPX/SPX)

NWLink is an IPX/SPX-compatible protocol for the Windows NT network architecture. It can be used to establish connections between computers running Windows NT Server or Windows NT Workstation and MS-DOS, OS/2, and Microsoft Windows through a variety of communication mechanisms.

NWLink is simply a protocol. By itself, it does not allow a computer running Windows NT Server or Windows NT Workstation to access files or printers shared on a NetWare server, or to act as a file or print server to a NetWare client. To access files or printers on a NetWare server, a redirector must be used, such as the Client Service for NetWare (CSNW) on Windows NT Workstation or the Gateway Service for NetWare (GSNW) on Windows NT Server.

NWLink is useful if there are NetWare client/server applications running that use Sockets or NetBIOS over the IPX/SPX protocol. The client portion can be run on a Windows NT Server or Windows NT Workstation system to access the server portion on a NetWare server, and vice versa.

NWNBLink contains Microsoft enhancements to Novell NetBIOS. The NWNBLink component is used to format NetBIOS-level requests and pass them to the NWLink component for transmission on the network.

The registry location for NWLink parameters is:

HKEY_LOCAL_MACHINE \System \CurrentControlSet \Services \NWLINK

Streams

Streams is a driver environment, modeled on the STREAMS interface found in UNIX System V, that can be used to host network protocols. There are two reasons for writing a protocol to use the Streams device driver.

  • Streams makes it easier to port existing protocols to the Windows NT operating system.

  • Streams encourages protocols to be organized in a modular, stackable style, thus moving closer to the original vision of the OSI model.

Figure 1.13: Windows NT 3.1 Streams

In Windows NT version 3.1, both TCP/IP and NWLink were surrounded by a Streams device driver. Calls to the TCP/IP or NWLink protocol first passed through the upper layer of the Streams device driver, and then to the NDIS device driver by way of the lower layer of the Streams device driver. The Streams device driver exposes the TDI interface at its top and the NDIS interface at its bottom. Streams is a significant departure from the way protocols were developed for MS-DOS and OS/2.

Streams has one great disadvantage: overhead. The protocol requires more instructions to pass a request from the TDI through Streams than if Streams were not used. This is why TCP/IP and NWLink do not use Streams in Windows NT version 3.5 or later.

Distributed Processing

A powerful computer can share its processing power, executing tasks on behalf of other computers. Applications that split processing between networked computers are called distributed applications. A client/server application is a distributed application in which processing is divided between a workstation (the client) and a more powerful server. The client portion is sometimes referred to as the front end and the server portion is sometimes referred to as the back end.

The client portion of a client/server application usually consists of just the user interface to the application. It runs on the client workstation and takes a low-to-average amount of processing power. Typically, the processing done by the client portion requires a large amount of network bandwidth. For example, the client portion would handle screen graphics, mouse movements, and keystrokes.

The server portion of a client/server application often requires large amounts of data storage, computing power, and specialized hardware. It performs operations that include database lookups, updates, and mainframe data access.

The goal of distributed processing is to move the actual application processing from the client system to a server system with the power to run large applications. During execution, the client portion formats requests and sends them to the server for processing. The server executes the request.

Distributed Component Object Model

In addition to supporting component object model (COM) for interprocess communication on a local computer, Windows NT Server now supports distributed component object model (DCOM). DCOM (or Networked OLE) is a system of software objects designed to be reusable and replaceable. The objects support sets of related functions, such as sorting, random-number generation, and database searches. Each set of functions is called an interface, and each DCOM object can have multiple interfaces. When applications access an object, they receive an indirect pointer to the interface functions. From then on, the calling application doesn't need to know where the object is or how it does its job.

DCOM allows you to efficiently distribute processes across multiple computers so that the client and server components of an application can be placed in optimal locations on the network. Processing occurs transparently to the user. Thus, the user can access and share information without needing to know where the application components are located. If the client and server components of an application are located on the same computer, DCOM can be used to transfer information between processes. DCOM is platform independent and supports any 32-bit application that is DCOM-aware.

Note: Before you can use an application with DCOM, you must use DCOM Configuration to set the application's properties.

Advantages of Using DCOM

DCOM is the preferred method for developers to use in writing client/server applications for Windows NT.

With DCOM, interfaces can be added or upgraded without deleting the old ones, so applications aren't forced to upgrade each time the object changes. Functions are implemented as dynamic-link libraries, so changes in the functions, including new interfaces or the way the function works, can be made without recompiling the applications that call them.

Windows NT 4.0 supports DCOM by making the implementation of application pointers transparent to the application and the object. Only the operating system needs to know if the function called is handled in the same process or across the network. This frees the application from concerns with local or remote procedure calls. Administrators can choose to run DCOM applications on local or remote computers, and can change the configuration for efficient load balancing.

For example, suppose your company's payroll department uses an application with DCOM to print paychecks. When a payroll employee runs a DCOM-enabled client application on a desktop, the application starts a business-rules server. Then, the server application connects to a database server and retrieves employee records, such as salary information. The business-rules server then transforms the payroll information into the final output and returns it to the client to print.

Your application may support its own set of DCOM features. For more information about configuring your application to use DCOM, see your application's documentation.

DCOM builds upon remote procedure call (RPC) technology by providing a more scalable, easier-to-use mechanism for integrating distributed applications on a network. A distributed application consists of multiple processes that cooperate to accomplish a single task. Unlike other interprocess communication (IPC) mechanisms, DCOM gives you a high degree of control over security features, such as permissions and domain authentication. It can also be used to launch applications on other computers or to integrate web-browser applications that run on the ActiveX™ platform.
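As a rough illustration of this location transparency, the following C sketch creates an instance of a hypothetical business-rules object on a remote computer with CoCreateInstanceEx. The ProgID ("Payroll.Rules") and server name ("PAYROLL1") are assumptions made for this example; a real client would request its own interfaces rather than IUnknown and would examine the returned HRESULT values in more detail.

#define _WIN32_DCOM
#include <windows.h>
#include <objbase.h>
#include <stdio.h>
#include <string.h>
#pragma comment(lib, "ole32.lib")
#pragma comment(lib, "uuid.lib")

int main(void)
{
    if (FAILED(CoInitialize(NULL)))
        return 1;

    // Look up the CLSID of a hypothetical business-rules component.
    CLSID clsid;
    HRESULT hr = CLSIDFromProgID(L"Payroll.Rules", &clsid);
    if (SUCCEEDED(hr)) {
        // COSERVERINFO names the remote computer; apart from this, the call
        // is identical to creating the object on the local computer.
        COSERVERINFO server;
        MULTI_QI qi;
        memset(&server, 0, sizeof(server));
        memset(&qi, 0, sizeof(qi));
        server.pwszName = L"PAYROLL1";      // hypothetical server name
        qi.pIID = &IID_IUnknown;            // interface requested from the object

        hr = CoCreateInstanceEx(&clsid, NULL, CLSCTX_REMOTE_SERVER,
                                &server, 1, &qi);
        if (SUCCEEDED(hr) && SUCCEEDED(qi.hr)) {
            // qi.pItf is used exactly like a local COM interface pointer.
            qi.pItf->lpVtbl->Release(qi.pItf);
        } else {
            printf("CoCreateInstanceEx failed: 0x%08lx\n", hr);
        }
    }
    CoUninitialize();
    return 0;
}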

Microsoft Visual Basic®, Enterprise Edition customers who are currently using Remote Automation can easily migrate their existing applications to use DCOM. For more information, see your Visual Basic documentation or visit the Visual Basic web site at www.microsoft.com/vbasic.

Setting Security on DCOM Applications

The Windows NT 4.0 security model is easily extended to DCOM objects. Administrators set permissions on DCOM applications and can vary those permissions for local and remote execution.

Once a DCOM-enabled application is installed, you can use DCOM Configuration (in Control Panel) for the following purposes.

  • To disable DCOM so that it can't be used for the computer or the application.

  • To set the location of the application.

  • To set permissions on the server application by specifying which user accounts can or cannot access or start it. You can grant permissions that apply to all applications installed on the computer or to only a particular application.

  • To set the user account (or identity) that will be used to run the server application. The client application uses this account to start processes and access resources on other computers in the domain. If the server application is installed as a service, you can run the application using the built-in System account or a Windows NT Server service account that you have created.

  • To control the level of security (for example, packet encryption) for connections between applications.

The computers running the client application and the server application must both be configured for DCOM. On the computer running as a client, you must specify the location of the server application that will be accessed or started. For the computer running the server application, you must specify the user account that will have permission to access or start the application, and the user account that will be used to run the application.

Interprocess Communication Mechanisms for Distributed Processing

The connection between the client and server portions of distributed applications must allow data to flow in both directions. There are a number of ways to establish this connection. The Windows NT operating system provides eight different Interprocess Communication (IPC) mechanisms.

  • Named Pipes

  • Mailslots

  • NetBIOS

  • Windows Sockets

  • Remote Procedure Calls (RPCs)

  • Network Dynamic Data Exchange (NetDDE)

  • Server Message Blocks (SMBs)

  • Distributed Component Object Model (DCOM)

Named Pipes and Mailslots

A pipe is a portion of memory that can be used by one process to pass information to another. A pipe connects two processes so that the output of one can be used as input to the other.

Named pipes and mailslots are actually written as file system drivers, so implementation of named pipes and mailslots differs from implementation of other IPC mechanisms. There are entries in the registry for NPFS (Named Pipe File System) and MSFS (Mailslot File System). As file systems, they share common functionality, such as security, with the other file systems. Local processes can also use named pipes and mailslots. As with all of the file systems, remote access to named pipes and mailslots is accomplished through the redirector.

Named pipes provide connection-oriented messaging. Named pipes are based on OS/2 API calls, which have been ported into the Win32 base API set. Additional asynchronous support has been added to named pipes to make support of client/server applications easier.

In addition to the APIs ported from OS/2, the Windows NT operating system provides special APIs that increase security for named pipes. Using a feature called impersonation, the server can change its security identity to that of the client at the other end of the message. A server typically has more permissions to access databases on the server than the client requesting services has. When the request is delivered to the server through a named pipe, the server changes its security identity to the security identity of the client. This limits the server to only those permissions granted to the client rather than its own permissions, thus increasing the security of named pipes.
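The following sketch, written against the Win32 named-pipe API, shows the shape of this impersonation pattern on the server side. The pipe name, buffer sizes, and the work done while impersonating are assumptions made for the example, not part of any particular Windows NT service.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Create a named pipe; clients open it as \\server\pipe\payroll.
    HANDLE hPipe = CreateNamedPipe(
        "\\\\.\\pipe\\payroll",              // hypothetical pipe name
        PIPE_ACCESS_DUPLEX,                  // read/write, connection-oriented
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        1,                                   // one instance for this sketch
        512, 512,                            // output/input buffer sizes
        0,                                   // default time-out
        NULL);                               // default security descriptor
    if (hPipe == INVALID_HANDLE_VALUE) {
        printf("CreateNamedPipe failed: %lu\n", GetLastError());
        return 1;
    }

    // Wait for a client to connect, then read its request.
    if (ConnectNamedPipe(hPipe, NULL) || GetLastError() == ERROR_PIPE_CONNECTED) {
        char request[512];
        DWORD bytesRead;
        if (ReadFile(hPipe, request, sizeof(request), &bytesRead, NULL)) {
            // Impersonation: the server thread temporarily takes on the
            // client's security identity before touching protected resources.
            if (ImpersonateNamedPipeClient(hPipe)) {
                // ... open files or databases here as the client ...
                RevertToSelf();              // return to the server's own identity
            }
        }
        DisconnectNamedPipe(hPipe);
    }
    CloseHandle(hPipe);
    return 0;
}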

The mailslot implementation in the Windows NT operating system is a subset of the Microsoft OS/2 LAN Manager implementation. The Windows NT operating system implements only second-class mailslots, not first-class mailslots. Second-class mailslots provide connectionless messaging for broadcast messages. Delivery of the message is not guaranteed, although the delivery rate on most networks is quite high. Connectionless messaging is most useful for identifying other computers or services on a network, such as the Computer Browser service offered in the Windows NT operating system.

For a description of connectionless messaging, see "Data Transfer Modes," earlier in this chapter.

NetBIOS

NetBIOS is a standard programming interface in the personal-computing environment for developing client/server applications. NetBIOS has been used as an IPC mechanism since the introduction of the interface in the early 1980s.

A NetBIOS client/server application can communicate over various protocols:

  • NetBEUI Frame protocol (NBF).

  • NWLink NetBIOS (NWNBLink).

  • NetBIOS over TCP/IP (NetBT). NetBT provides RFC 1001/1002 NetBIOS support for the TCP/IP protocol stack.

From a programming perspective, higher-level IPC mechanisms, such as named pipes and RPC, have superior flexibility and portability.

NetBIOS uses the following components.

  • Netapi32.dll, which shares the address space of the NetBIOS user-mode application. (However, Netapi32.dll is used for more than NetBIOS requests.)

  • NetBIOS emulator, which provides the NetBIOS mapping layer between NetBIOS applications and the TDI-compliant protocols.

Figure 1.14: NetBIOS programming interface

MS-DOS-based NetBIOS applications are hard-coded to use a specific LANA number for communicating on the network. You can assign a LANA number to each network route. The network route consists of the protocol driver and the network adapter that will be used for NetBIOS commands sent to its assigned LANA number.
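The sketch below shows how an application can discover which LANA numbers are currently configured by issuing an NCBENUM command through the Netbios function, and then reset each one so that it is ready for use. The output format is illustrative only.

#include <windows.h>
#include <nb30.h>
#include <string.h>
#include <stdio.h>
#pragma comment(lib, "netapi32.lib")

int main(void)
{
    NCB ncb;
    LANA_ENUM lanas;

    // NCBENUM asks the NetBIOS emulator for the LANA numbers that are
    // currently configured (one per protocol/adapter route).
    memset(&ncb, 0, sizeof(ncb));
    ncb.ncb_command = NCBENUM;
    ncb.ncb_buffer  = (PUCHAR)&lanas;
    ncb.ncb_length  = sizeof(lanas);

    if (Netbios(&ncb) != NRC_GOODRET) {
        printf("NCBENUM failed: 0x%02x\n", ncb.ncb_retcode);
        return 1;
    }

    for (int i = 0; i < lanas.length; i++) {
        // Each LANA must be reset before NCBCALL or NCBLISTEN can be issued on it.
        NCB reset;
        memset(&reset, 0, sizeof(reset));
        reset.ncb_command  = NCBRESET;
        reset.ncb_lana_num = lanas.lana[i];
        Netbios(&reset);
        printf("LANA %d available\n", lanas.lana[i]);
    }
    return 0;
}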

To assign a LANA number to a network route

  1. Click Start, point to Settings, and click Control Panel.

  2. Double-click Network.

  3. Click the Services tab.

  4. Click NetBIOS Interface, and then click Properties.

    The NetBIOS Configuration dialog box appears.

  5. Click the number you want under Lana Number, and then click Edit.

  6. Type a new number, and click OK.

Windows Sockets

The Windows Sockets API provides a standard interface to protocols with different addressing schemes. The Sockets interface was developed at the University of California, Berkeley, in the early 1980s. The Windows Sockets API was developed to migrate the Sockets interface into the Windows and Windows NT environments. Windows Sockets was also developed to help standardize an API for all operating system platforms. Windows Sockets is supported on the following protocols.

  • TCP/IP

  • NWLink (IPX/SPX)

Figure 1.15: Windows Sockets programming interface

Windows Sockets consists of the following items.

  • Wsock32.dll, which shares the address space of the Windows Sockets user-mode application.

  • Windows Sockets emulator, which provides the Windows Sockets mapping layer between the Windows Sockets applications and the TDI-compliant protocols.
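As a rough illustration of how a Windows Sockets application sits on top of this architecture, the following sketch initializes Wsock32.dll and opens a TCP connection over the TCP/IP protocol. The server address and port are placeholders; a production application would resolve names and handle errors more carefully.

#include <winsock.h>   // Winsock 1.1, as shipped with Windows NT
#include <string.h>
#include <stdio.h>
#pragma comment(lib, "wsock32.lib")

int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(1, 1), &wsa) != 0)   // initialize Wsock32.dll
        return 1;

    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);  // TCP stream socket
    if (s == INVALID_SOCKET) { WSACleanup(); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(7);                    // echo service, used as an example
    addr.sin_addr.s_addr = inet_addr("10.0.0.1");  // hypothetical server address

    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        const char *msg = "hello";
        send(s, msg, (int)strlen(msg), 0);         // the request travels down through TDI
        char reply[64];
        int n = recv(s, reply, sizeof(reply), 0);
        if (n > 0) printf("received %d bytes\n", n);
    }
    closesocket(s);
    WSACleanup();
    return 0;
}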

Remote Procedure Call

Much of the original work on Remote Procedure Call (RPC) was initiated at Sun Microsystems. This work has been carried forward by the Open Software Foundation (OSF) as part of their Distributed Computing Environment (DCE). The Microsoft RPC implementation is compatible with the OSF/DCE standard RPC.

It is important to note that it is compatible but not compliant. In this situation, compliance implies that you started with the OSF source code and worked forward. For a number of reasons, Microsoft developed RPC from the ground up. The RPC mechanism is completely compatible with other DCE-based RPC systems, such as those for HP and IBM AIX systems, and will interoperate with them.

The Microsoft RPC mechanism is unique in that it uses the other IPC mechanisms to establish communications between the client and the server. RPC can use the following to communicate with remote systems:

  • Named pipes

  • NetBIOS

  • Windows Sockets

If the client and server portions of the application are on the same machine, local procedure calls (LPCs) can be used to transfer information between processes. This makes RPC the most flexible and portable of the IPC choices available.

RPC is based on the concepts used for creating structured programs, which can be viewed as having a "backbone" to which a series of "ribs" can be attached. The backbone is the mainstream logic of the program, which should rarely change. The ribs are the procedures that the backbone calls on to do work or perform functions. In traditional programs, these ribs were statically linked to the backbone and stored in the same executable.

Windows and OS/2 use dynamic-link libraries (DLLs). With DLLs, the procedure code and the backbone code are in separate pieces. This enables the DLL to be modified or updated without changing or redistributing the backbone portion.

RPC takes the concept one step further and places the backbone and the ribs on different computers. This raises many issues, such as data formatting, integer-byte ordering, locating which server contains the function, and determining which communication mechanism to use.

RPC is the developer's preferred method for writing client/server applications for Windows NT. The components necessary to use a remote procedure call are the following items.

  • Remote Procedure Stub (Proc Stub), which packages remote procedure calls to be sent to the server by means of the RPC run time.

  • RPC Run Time (RPC RT), which is responsible for communications between the local and remote computer, including the passing of parameters.

  • Application Stub (APP Stub), which accepts RPC requests from RPC RT, unwraps the package, and makes the appropriate call to the remote procedure.

  • Remote Procedure, which is the actual procedure that is called across the network.

Client applications are developed with a specially compiled "stub" library. The client application "thinks" it will call its own subroutines. In reality, these stubs will transfer the data and the function to the RPC RT module. This module will be responsible for finding the server that can satisfy the RPC command. Once found, the function and data will be sent to the server, where they are picked up by the RPC RT component on the server. The server piece then loads the library needed for the function, builds the appropriate data structure, and calls the function.

The function interprets the call as coming from the client application. When the function is completed, any return values will be collected, formatted, and sent back to the client through the RPC RT. When the function returns to the client application, it will have the appropriate returned data or an indication that the function failed.
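The following sketch shows the client-side plumbing that precedes an actual remote call: composing a string binding and converting it into a binding handle for the RPC run time. The server name, endpoint, and the stub routine mentioned in the comment are hypothetical; the stub itself would be generated by the MIDL compiler from the application's interface definition.

#include <windows.h>
#include <rpc.h>
#include <stdio.h>
#pragma comment(lib, "rpcrt4.lib")

int main(void)
{
    RPC_STATUS status;
    unsigned char *stringBinding = NULL;
    RPC_BINDING_HANDLE binding = NULL;

    // Build a string binding: protocol sequence "ncacn_np" means
    // "RPC over named pipes"; the endpoint is a pipe on the server.
    status = RpcStringBindingCompose(
        NULL,                               // no object UUID
        (unsigned char *)"ncacn_np",        // use named pipes as the transport
        (unsigned char *)"\\\\payroll1",    // hypothetical server name
        (unsigned char *)"\\pipe\\payroll", // hypothetical endpoint
        NULL,                               // no extra options
        &stringBinding);
    if (status != RPC_S_OK) return 1;

    // Convert the string binding into a binding handle that the RPC run time
    // (RPC RT) uses to locate and communicate with the server.
    status = RpcBindingFromStringBinding(stringBinding, &binding);
    RpcStringFree(&stringBinding);
    if (status != RPC_S_OK) return 1;

    // At this point a MIDL-generated client stub (for example,
    // ComputePayroll(binding, ...)) would package the call and hand it
    // to the RPC run time for transmission.
    printf("binding handle ready\n");

    RpcBindingFree(&binding);
    return 0;
}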

Figure 1.16: How RPC calls operate

Network Dynamic Data Exchange

Network Dynamic Data Exchange (NetDDE) is an extension of the Dynamic Data Exchange (DDE) protocol that has been in use since Windows version 2.x. NetDDE enables users to use DDE over a NetBIOS-compatible network. To understand NetDDE, you need to know something about DDE.

DDE is a protocol that allows applications to exchange data. To perform such an exchange, the two participating applications must first engage in a DDE conversation. The application that initiates the DDE conversation is the DDE client application, and the application that responds to the client request is the DDE server application.

A single application can be simultaneously engaged in multiple DDE conversations, acting as the DDE client application in some DDE conversations and as the DDE server application in others. This allows a user to set up a DDE link between applications and have one of the applications automatically update another.

Figure 1.17: NetDDE

NetDDE extends all of the DDE capabilities so that they can be used across the network, using the NetBIOS emulator. This enables applications on two or more workstations to dynamically share information. NetDDE is not a special form of DDE but rather a service that examines the information contained in a DDE conversation and looks for a special application name. Implementing NetDDE in this manner allows any DDE application to take advantage of NetDDE without modification.

The NetDDE service examines DDE requests, looking for the use of a special application name reserved by NetDDE, which is preceded by the name of the remote system. The reserved application name is NDDE$; therefore, NetDDE is looking for DDE requests that use an application name in the following form: \\<servername>\ndde$.

Before a user can connect to a printer or directory from a remote location, the printer or directory must be shared. Similarly, a NetDDE share must be created on a computer before an application on that computer can use NetDDE to communicate with the application on another computer. NetDDE-aware applications, such as Chat, automatically create a NetDDE share for themselves during installation. For other applications, a NetDDE share can be created with ClipBook Viewer, and data can then be exchanged through the ClipBoard. In addition, Windows NT includes the DDE Share utility (Ddeshare.exe), which can be used to set up a NetDDE share so that applications can directly exchange data.

NetDDE shares are defined in the registry. They are accessed by communicating with the Network DDE Service Data Manager (DSDM), which is the Windows NT operating system service that supports the rest of NetDDE.

Because NetDDE is simply an extension of DDE, the same APIs used to establish a DDE conversation are used to establish NetDDE conversations.
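The sketch below uses the DDE Management Library (DDEML) to open a conversation with a NetDDE share on a remote computer. The server name (SALES1) and share name (PAYROLL$) are hypothetical; a real application would follow the connection with DdeClientTransaction calls to request or poke data items.

#include <windows.h>
#include <ddeml.h>
#include <stdio.h>
#pragma comment(lib, "user32.lib")

// Minimal DDEML callback; a client that issues only synchronous
// transactions can return NULL for every notification.
HDDEDATA CALLBACK DdeCallback(UINT type, UINT fmt, HCONV hconv,
                              HSZ hsz1, HSZ hsz2, HDDEDATA hdata,
                              ULONG_PTR d1, ULONG_PTR d2)
{
    return (HDDEDATA)NULL;
}

int main(void)
{
    DWORD idInst = 0;
    if (DdeInitialize(&idInst, DdeCallback,
                      APPCLASS_STANDARD | APPCMD_CLIENTONLY, 0) != DMLERR_NO_ERROR)
        return 1;

    // For NetDDE, the service name is \\<servername>\NDDE$ and the topic
    // is the name of a NetDDE share defined on that server.
    HSZ hszService = DdeCreateStringHandle(idInst, "\\\\SALES1\\NDDE$", CP_WINANSI);
    HSZ hszTopic   = DdeCreateStringHandle(idInst, "PAYROLL$", CP_WINANSI); // hypothetical share

    HCONV hConv = DdeConnect(idInst, hszService, hszTopic, NULL);
    if (hConv != NULL) {
        // The conversation now behaves like any local DDE conversation;
        // DdeClientTransaction could request or poke data items here.
        DdeDisconnect(hConv);
    } else {
        printf("DdeConnect failed: %u\n", DdeGetLastError(idInst));
    }

    DdeFreeStringHandle(idInst, hszService);
    DdeFreeStringHandle(idInst, hszTopic);
    DdeUninitialize(idInst);
    return 0;
}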

In Windows NT 3.1, the NetDDE services automatically load at system startup. In Windows NT 3.5 and later, the default startup type for NetDDE is manual, which improves startup time. The startup type for the NetDDE services can be configured through Control Panel.

Server Message Blocks

The Server Message Block (SMB) protocol, developed jointly by Microsoft, Intel, and IBM, defines a series of commands used to pass information between networked computers. The redirector packages network-control-block (NCB) requests meant for remote computers in an SMB structure. SMBs can be sent over the network to remote devices. The redirector also uses SMBs to make requests to the protocol stack of the local computer, such as "Create a session with the file server."

SMB uses four message types, which are listed below.

  • Session control messages, which consist of commands that start and end a redirector connection to a shared resource at the server.

  • File messages, which are used by the redirector to gain access to files at the server.

  • Printer messages, which are used by the redirector to send data to a print queue at a server and to get status information about the print queue.

  • Message messages, which allow an application to exchange messages with another workstation.

The provider DLL listens for SMB messages destined for it and removes the data portion of the SMB request so that it can be processed by a local device.

SMBs provide interoperability between different versions of the Microsoft family of networking products and other networks that use SMBs, including those on the following list.

  • MS® OS/2 LAN Manager

  • Microsoft Windows for Workgroups

  • IBM LAN Server

  • MS-DOS LAN Manager

  • DEC PATHWORKS

  • Microsoft LAN Manager for UNIX

  • 3Com 3+Open

  • MS-Net

Network Resource Access

Applications reside above the redirector and server services in user mode. As in the other layers of the Windows NT networking architecture, a unified interface provides access to network resources, independent of any redirectors installed on the system. Access to resources is provided through one of two components: the Multiple Universal Naming Convention Provider (MUP) and the Multi-Provider Router (MPR).

Multiple Universal Naming Convention Provider

When applications make I/O calls containing Universal Naming Convention (UNC) names, these requests are passed to the Multiple Universal Naming Convention Provider (MUP). MUP selects the appropriate UNC provider (redirector) to handle the I/O request.

Universal Naming Convention Names

UNC is a naming convention for describing network servers and the share points on those servers. UNC names start with two backslashes followed by the server name. All other fields in the name are separated by a single backslash. A typical UNC name would appear as: \\server\share\subdirectory\filename.

Not all of the components of the UNC name need to be present with each command; only the share component is required. For example, the command dir \\servername\sharename can be used to obtain a directory listing of the root of the specified share.

Why MUP?

One of the design goals of the Windows NT networking environment is to provide a platform upon which others can build. MUP is a vital part of allowing multiple redirectors to coexist in the computer. MUP frees applications from maintaining their own UNC-provider listings.

How MUP Works

MUP is actually a driver, unlike the TDI interface, which merely defines the way a component on one layer communicates with a component on another layer. MUP also has defined paths to UNC providers (redirectors).

I/O requests from applications that contain UNC names are received by the I/O Manager, which in turn passes the requests to MUP. If MUP has not seen the UNC name during the previous 15 minutes, MUP will send the name to each of the UNC providers registered with it. MUP is a prerequisite of the Workstation service.

Figure 1.18: Multiple Universal Naming Convention Provider

When a request containing a UNC name is received by MUP, it checks with each redirector to find out which one can process the request. MUP looks for the redirector with the highest registered-priority response that claims it can establish a connection to the UNC. This connection remains as long as there is activity. If there has been no request for 15 minutes on the UNC name, then MUP once again negotiates to find an appropriate redirector.
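From an application's point of view, nothing special is required to go through MUP; any I/O call that names a UNC path will do. The following sketch simply opens and reads a file by its UNC name (the path is a placeholder), and the I/O Manager and MUP route the request to the appropriate redirector.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // The UNC name is handed to the I/O Manager, which passes it to MUP;
    // MUP then asks each registered redirector to claim the path.
    HANDLE hFile = CreateFile("\\\\server\\share\\subdirectory\\filename",
                              GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    char buffer[256];
    DWORD bytesRead;
    if (ReadFile(hFile, buffer, sizeof(buffer), &bytesRead, NULL))
        printf("read %lu bytes over the network\n", bytesRead);

    CloseHandle(hFile);
    return 0;
}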

Multi-Provider Router

Not all programs use UNC names in their I/O requests. Some applications use WNet APIs, which are the Win32 network APIs. The Multi-Provider Router (MPR) was created to support these applications.

MPR is similar to MUP. MPR receives WNet commands, determines the appropriate redirector, and passes the command to that redirector. Because different network vendors use different interfaces for communicating with their redirector, there is a series of provider DLLs between MPR and the redirectors. The provider DLLs expose a standard interface so that MPR can communicate with them. The DLLs "know" how to take the request from MPR and communicate it to their corresponding redirector.

Figure 1.19: Multi-provider Router

The provider DLLs are supplied by the network-redirector vendor and should automatically be installed when the redirector is installed.
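A minimal example of a WNet call that MPR routes appears below; it maps a drive letter to a shared directory and then removes the mapping. The drive letter and UNC name are placeholders, and the NULL provider name lets MPR determine which provider owns the resource.

#include <windows.h>
#include <winnetwk.h>
#include <stdio.h>
#pragma comment(lib, "mpr.lib")

int main(void)
{
    NETRESOURCE nr = {0};
    nr.dwType       = RESOURCETYPE_DISK;
    nr.lpLocalName  = "Z:";                    // drive letter to map (placeholder)
    nr.lpRemoteName = "\\\\server\\share";     // UNC name of the shared directory
    nr.lpProvider   = NULL;                    // let MPR pick the right provider

    // MPR receives this WNet call, asks the provider DLLs which one owns
    // the resource, and routes the request to that provider's redirector.
    DWORD err = WNetAddConnection2(&nr, NULL, NULL, 0);
    if (err != NO_ERROR) {
        printf("WNetAddConnection2 failed: %lu\n", err);
        return 1;
    }
    printf("Connected Z: to \\\\server\\share\n");

    WNetCancelConnection2("Z:", 0, FALSE);     // remove the mapping when done
    return 0;
}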

The Workstation Service

All user-mode requests go through the Workstation service. This service consists of two components.

  • The user-mode interface, which resides in Lmsvcs.exe in Windows NT 3.1 and Services.exe in Windows NT 3.5 and later.

  • The redirector (Rdr.sys), which is a file-system driver that interacts with the lower-level network drivers by means of the TDI interface.

The Workstation service receives the user request and passes it to the kernel-mode redirector.

Figure 1.20: Workstation Service

Windows NT Redirector

The redirector (RDR) is a component that resides above TDI and through which one computer gains access to another computer. The Windows NT operating system redirector allows connection to Windows for Workgroups, LAN Manager, LAN Server, and other MS-Net-based servers. The redirector communicates to the protocols by means of the TDI interface.

The redirector is implemented as a Windows NT file system driver. Implementing a redirector as a file system has several benefits, which are listed below.

  • It allows applications to call a single API (the Windows NT I/O API) to access files on local and remote computers. From the I/O Manager perspective, there is no difference between accessing files stored on a remote computer on the network and accessing those stored locally on a hard disk.

  • It runs in kernel mode and can directly call other drivers and other kernel-mode components, such as Cache Manager. This improves the performance of the redirector.

  • It can be dynamically loaded and unloaded, like any other file-system driver.

  • It can easily coexist with other redirectors.

Figure 1.21: Windows NT redirector

Interoperating with Other Networks

Besides allowing connections to LAN Manager, LAN Server, and MS-Net servers, the Windows NT redirector can coexist with redirectors for other networks, such as Novell NetWare and Banyan VINES.

While Windows NT includes integrated networking, its open design provides transparent access to other networks. For example, a computer running Windows NT Server can concurrently access files stored on Windows NT and NetWare servers.

Providers and the Provider-interface Layer

For each additional type of network, such as NetWare or VINES, you must install a component called a provider. The provider is the component that allows a computer running Windows NT Server or Windows NT Workstation to communicate with the network. The Windows NT operating system includes two providers: Client Service for NetWare and Gateway Service for NetWare.

Client Service for NetWare is included with Windows NT Workstation and allows a computer running Windows NT Workstation to connect as a client to the NetWare network. Gateway Service for NetWare, included with Windows NT Server, allows a computer running Windows NT Server to connect as a client to the NetWare network. Other provider DLLs are supplied by the appropriate network vendors.

Accessing a Remote File

When a process on a Windows NT computer tries to open a file that resides on a remote computer, the following steps occur.

First, the process calls the I/O Manager to request that the file be opened.

Then, the I/O Manager recognizes that the request is for a file on a remote computer, and passes the request to the redirector file-system driver.

Finally, the redirector passes the request to lower-level network drivers, which transmit it to the remote server for processing.

Workstation Service Dependencies

Configuration requirements for loading the Workstation service include:

  • A protocol that exposes the TDI interface must be started.

  • The MUP driver must be started.

The Server Service

Windows NT includes a second component, called the Server service. Like the redirector, the Server service sits above TDI, is implemented as a file system driver, and directly interacts with various other file-system drivers to satisfy I/O requests, such as reading or writing to a file.

The Server service supplies the connections requested by client-side redirectors and provides them with access to the resources they request.

When the Server service receives a request from a remote computer asking to read a file that resides on the local hard drive, the following steps occur.

  • The low-level network drivers receive the request and pass it to the server driver (SRV).

  • The Server service passes a read-file request to the appropriate local file-system driver.

  • The local file-system driver calls lower-level, disk-device drivers to access the file.

  • The data is passed back to the local file-system driver.

  • The local file-system driver passes the data back to the Server service.

  • The Server service passes the data to the lower-level network drivers for transmission back to the client computer.

Figure 1.22: Server Service

Like the Workstation service, the Server service is composed of two parts.

  • Server, a service that runs in Services.exe, the Service Control Manager process in which all services start. Unlike the Workstation service, the Server service is not dependent on MUP because the server is not a UNC provider; it does not attempt to connect to other computers, but other computers connect to it.

  • Srv.sys, a file system driver that handles the interaction with the lower levels and directly interacts with various file system devices to satisfy command requests, such as file read and write.

Binding Options

Earlier in this chapter, we discussed how the Windows NT network architecture consists of a series of layers and how components in each layer perform specific functions for the layers above and below it. The bottom of the network architecture ends at the network adapter card, which moves information between computers that are part of the network.

Figure 1.23: Network protocol bindings

The linking of network components on different levels to enable communication between those components is called binding. A network component can be bound to one or more network components above or below it. The services that each component provides can be shared by all the components bound to it.

When adding network software, Windows NT automatically binds all dependent components accordingly.

In the Network window, the Bindings tab displays the bindings of the installed network components, from the upper-layer services and protocols to the lowest layer of network adapter drivers. Bindings can be enabled and disabled, based on the use of the network components installed on the system.

Figure 1.24: The Bindings tab of the Network window

Bindings can be ordered or sequenced to optimize the system's use of the network. For example, if NetBEUI and TCP/IP are installed on a computer, and most of the servers that the computer connects to are running only TCP/IP, the Workstation bindings should be examined. The administrator of this computer would want to make sure that the Workstation service is bound to TCP/IP first and that NetBEUI is at the bottom of the list.
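The binding order shown on the Bindings tab is recorded in the registry under each service's Linkage key. The following sketch reads the Workstation service's Bind value, a REG_MULTI_SZ list whose order is the binding order, and prints each transport. This is shown for illustration only; bindings should normally be changed through the Network tool rather than by editing the registry directly.

#include <windows.h>
#include <string.h>
#include <stdio.h>
#pragma comment(lib, "advapi32.lib")

int main(void)
{
    // The Workstation service's bindings are stored as a REG_MULTI_SZ list
    // under its Linkage key; the order of the entries is the binding order.
    HKEY hKey;
    if (RegOpenKeyEx(HKEY_LOCAL_MACHINE,
                     "System\\CurrentControlSet\\Services\\LanmanWorkstation\\Linkage",
                     0, KEY_READ, &hKey) != ERROR_SUCCESS)
        return 1;

    char bind[4096];
    DWORD size = sizeof(bind), type = 0;
    if (RegQueryValueEx(hKey, "Bind", NULL, &type, (LPBYTE)bind, &size) == ERROR_SUCCESS
            && type == REG_MULTI_SZ) {
        // Walk the double-null-terminated list and print each transport binding.
        for (char *p = bind; *p != '\0'; p += strlen(p) + 1)
            printf("%s\n", p);
    }
    RegCloseKey(hKey);
    return 0;
}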

In Windows NT 3.1, the redirector uses the following method to establish a connection.

  • First, the redirector submits the connect request to the first protocol driver in the bindings order and waits for that protocol driver to complete the request and return.

  • If the first protocol driver's return indicates that it could not connect to the specified server, then the redirector submits the connect request to the second protocol driver in the bindings order.

  • This continues down the bindings order until a protocol-driver return indicates that the connection was successful or until all protocol drivers have been tried and have failed.

In Windows NT 3.5 and later, the redirector uses the following method to establish a connection.

  • The redirector simultaneously submits the connect request to all protocol drivers.

  • When one of the protocol drivers successfully completes the request, the redirector waits until all higher-priority protocol drivers, if there are any, have returned. (The priority is based on the bindings order.)

  • The redirector then proceeds to use the highest-priority protocol driver that returned with success status, and it disconnects all connections that may have been established through lower-priority protocol drivers.

Remote Access Service

The Windows NT Workstation and Windows NT Server RAS connects remote or mobile workers to corporate networks. Optimized for client/server computing, RAS is implemented primarily as a software solution, and is available on all Microsoft operating systems.

The distinction between RAS and remote control solutions (such as Cubix and pcANYWHERE) is important:

  • RAS is a software-based multi-protocol router; remote control solutions work by sharing screen, keyboard, and mouse over the remote link.

  • In a remote control solution, users share a CPU or multiple CPUs on the server. The RAS server's CPU is dedicated to communications, not to running applications.

Point-to-Point Protocol

The Windows NT operating system supports the Point-to-Point Protocol (PPP) in RAS. PPP is a set of industry-standard framing and authentication protocols. PPP negotiates configuration parameters for multiple layers of the OSI model.

PPP support in Windows NT 3.5 and later (and Windows 95) means that computers running Windows can dial into remote networks through any server that complies with the PPP standard. PPP compliance enables a Windows NT Server to receive calls from other vendors' remote-access software and to provide network access to them.

The PPP architecture also enables clients to load any combination of IPX, TCP/IP, and NetBEUI. Applications written to the Windows Sockets, NetBIOS, or IPX interfaces can now be run on a remote computer running Windows NT Workstation. The following figure illustrates the PPP architecture of RAS.

Figure 1.25: Point-to-Point Protocol

RAS Connection Sequence

The RAS connection sequence is key to understanding the PPP protocol. Upon connecting to a remote computer, the PPP negotiation begins.

  • First, framing rules are established between the remote computer and server. This allows continued communication (frame transfer) to occur.

  • Next, the RAS server uses the PPP authentication protocols (PAP, CHAP, SPAP) to authenticate the remote user. The protocols invoked depend on the security configurations of the remote client and server.

  • Once authenticated, the Network Control Protocols (NCPs) are used to enable and configure the server for the LAN protocol that will be used on the remote client.

When the PPP connection sequence is successfully completed, the remote client and RAS server can begin to transfer data using any supported protocol, such as Windows Sockets, RPC, or NetBIOS. The following illustration shows where the PPP protocol is on the OSI model.

Figure 1.26: PPP within the OSI model

If a remote client is configured to use the NetBIOS Gateway or Serial Line Internet Protocol (SLIP), this sequence does not apply.
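For a PPP client application, the entire sequence above is driven by a single RAS API call. The following sketch dials a hypothetical phone-book entry synchronously with RasDial; the entry name, user name, and domain are placeholders.

#include <windows.h>
#include <ras.h>
#include <raserror.h>
#include <string.h>
#include <stdio.h>
#pragma comment(lib, "rasapi32.lib")

int main(void)
{
    RASDIALPARAMS params;
    memset(&params, 0, sizeof(params));
    params.dwSize = sizeof(params);
    strcpy(params.szEntryName, "Corp HQ");   // hypothetical phone-book entry
    strcpy(params.szUserName,  "jsmith");    // hypothetical credentials
    strcpy(params.szPassword,  "");
    strcpy(params.szDomain,    "CORP");

    // RasDial drives the PPP sequence described above: framing is negotiated,
    // the authentication protocols run, and the NCPs configure the LAN
    // protocols before the connection handle is returned.
    HRASCONN hConn = NULL;
    DWORD err = RasDial(NULL, NULL, &params, 0, NULL, &hConn);
    if (err != 0) {
        printf("RasDial failed: %lu\n", err);
        return 1;
    }
    printf("Connected; applications can now use Windows Sockets, RPC, or NetBIOS.\n");

    RasHangUp(hConn);                        // drop the connection when done
    return 0;
}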

Point-to-Point Tunneling Protocol

A RAS server is usually connected to a PSTN, ISDN, or X.25 network, allowing remote users to access a server through these networks. RAS now allows remote users access through the Internet by using the new Point-to-Point Tunneling Protocol (PPTP).

PPTP is a new networking technology that supports multiprotocol virtual private networks (VPNs), enabling remote users to access corporate networks securely across the Internet by dialing into an Internet Service Provider (ISP) or by connecting directly to the Internet. For more information, see the Microsoft Windows NT Server Networking Supplement, Chapter 11, "Point-to-Point Tunneling Protocol."

NetBIOS Gateway

Windows NT continues to support NetBIOS gateways, the architecture used in previous versions of the Windows NT operating system and LAN Manager. Remote users connect using NetBEUI, and the RAS server translates packets to IPX or TCP/IP, if necessary. This enables users to share network resources in a multiprotocol LAN, but prevents them from running applications that rely on IPX or TCP/IP on the client. The NetBIOS gateway is used by default when remote clients use NetBEUI. The following illustration shows the NetBIOS gateway architecture of RAS.

Figure 1.27: NetBIOS gateway architecture of RAS

An example of the NetBIOS gateway capability is remote network access for Lotus Notes users. While Lotus Notes does offer dial-up connectivity, dial-up is limited to the Notes application. RAS complements this connectivity by providing a low-cost, high-performance remote-network connection for Notes users, one that not only connects Notes but also offers file and print services and access to other network resources.

Serial Line Internet Protocol

SLIP is an older communications standard found in UNIX environments. SLIP does not provide the automatic negotiation of network configuration and encrypted authentication that PPP can provide. SLIP requires user intervention. Windows NT RAS can be configured as a SLIP client, enabling users to dial into an existing SLIP server. RAS does not provide a SLIP server in Windows NT Server.

For more information about RAS issues, see the Rasphone.hlp online Help file on the Windows NT distribution disks (or, if RAS has been installed, see systemroot\System32).

Services for Macintosh

Through Windows NT Services for Macintosh, Macintosh users can connect to a Windows NT server in the same way they would connect to an AppleShare server. Windows NT Services for Macintosh will support an unlimited number of simultaneous AFP connections to a Windows NT server, and Macintosh sessions will be integrated with Windows NT sessions. The per-session memory overhead is approximately 15K.

Existing versions of LAN Manager Services for the Macintosh can be easily upgraded to Windows NT Services for Macintosh. OS/2-based volumes that already exist are converted with permissions intact. Graphical installation, administration, and configuration utilities are integrated with existing Windows NT administration tools. Windows NT Services for Macintosh requires System 6.0.7 or higher and is AFP 2.1-compliant; however, AFP 2.0 clients are also supported. AFP 2.1 compliance provides support for logon messages and server messages.

Support for Macintosh networking is built into the core operating system for Windows NT Server. Windows NT Services for Macintosh includes a full AFP 2.0 file server. All Macintosh file system attributes, such as resource and data forks and 32-bit directory IDs, are supported. As a file server, Services for Macintosh intelligently manages filenames, icons, and access permissions for the different networks. For example, a Word for Windows file appears on the Macintosh with the correct Word for Macintosh icon. Macintosh applications can also be launched from the file server. When files are deleted, no orphaned resource forks are left to be cleaned up.

Windows NT Services for Macintosh fully supports and complies with Windows NT security. It presents the AFP security model to Macintosh users and allows them to access files on volumes that reside on CD-ROM or other read-only media. The AFP server also supports both cleartext and encrypted passwords at logon time. The administrator has the option to configure the server not to accept cleartext passwords.

Services for Macintosh can be administered from Control Panel and can be started transparently if the administrator has configured the server to use this facility.

Macintosh-accessible volumes can be created in My Computer. Services for Macintosh automatically creates a Public Files volume at installation time. Windows NT file and directory permissions are automatically translated into corresponding Macintosh permissions.

Windows NT Services for Macintosh has the same functionality as the LAN Manager Services for Macintosh 1.0 MacPrint. Administration and configuration are also easier. There is a user interface for publishing a print queue on AppleTalk and a user interface for choosing an AppleTalk printer as a destination device. The Windows NT print subsystem handles AppleTalk despooling errors, and uses the built-in printer support in Windows NT. (The PPD file scheme of Macintosh Services 1.0 is not used.) Services for Macintosh also has a PostScript-compatible engine that allows Macintosh computers to print to any Windows NT printer as if they were printing to a LaserWriter.