Deploying ATM with MS Windows NT and MS Windows

Archived content. No warranty is made as to technical accuracy. Content may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

Abstract

Asynchronous Transfer Mode (ATM) technology is emerging as an important worldwide standard for the transmission of information. Rapidly being deployed by telephone companies and enterprise customers, ATM represents for many the next generation for LAN switching. Its ability to accommodate the simultaneous transmission of data, voice, and video is enabling a spectrum of new applications, including those supporting real-time voice and video. To facilitate the transition to ATM networks, Microsoft has launched the industry's first testing and logo certification program for LAN Emulation solutions for Microsoft® Windows NT® Server 4.0 (and soon for Windows 95).

A LAN Emulation module and call manager, designed to Network Driver Interface Specification (NDIS) 3.0 or 4.0, allows an ATM card to function smoothly in a heterogeneous ATM/Ethernet environment, allowing network administrators to incrementally upgrade their systems to ATM.

Microsoft has also outlined plans to support ATM natively, something that will allow developers and network administrators to take advantage of ATM's unique quality-of-service transmission guarantees, while freeing them from having to write or license proprietary LAN emulation modules or call managers. This native ATM support is planned for the next major release of the Windows NT operating system. These efforts should make it easier and more cost effective for organizations to upgrade to the benefits of ATM technology.

On This Page

Introduction
LAN Emulation Testing and Certification
Taking a Closer Look at ATM
Exploring the Types of ATM Services
Getting Started with LAN Emulation
Supporting ATM Natively
Summary
Appendix A: An ATM Primer

Introduction

Microsoft is taking a leadership role in the computer industry's movement toward Asynchronous Transfer Mode (ATM) technology, which is already being deployed rapidly throughout the telecommunications industry. ATM is expected to elevate computer networking to a next-generation level that will enable a spectrum of new applications, including greatly enhanced multimedia, videoconferencing, and movie-on-demand type video streaming, as well as telephony and other real-time voice services.

ATM accommodates the simultaneous transmission of data, voice, and video, allowing for the unification of today's separate networks that were created for specific functions (computers for data; telephones for voice). And ATM easily coexists in mixed network environments. ATM has been developed to ensure smooth integration with numerous existing network technologies at several levels including Frame Relay, Ethernet, and TCP/IP. This is good news for net administrators, because it means that networks can be incrementally upgraded to ATM.

Microsoft believes that the benefits of ATM technology are significant enough for LAN/WAN communications that it has launched the industry's first logo and compatibility testing program for ATM LAN Emulation solutions for Microsoft® Windows NT® Server 4.0.

The testing and certification will allow vendors to offer their ATM solutions with confidence and will provide assurance for end users upgrading to ATM technology. An ATM call manager and a LAN emulation module allow ATM cards to function in an Ethernet environment, with the ATM card appearing to overlying applications and protocols as if it were Ethernet. The LAN emulation module and call manager work together to accept Ethernet headers and addresses and translate them for ATM. LAN Emulation for token ring systems is being developed.

Microsoft has also outlined its architectural direction for supporting ATM natively in the operating system, something that will be examined later in this paper.

ATM represents a significant advance in network communications and is expected to have a long life. There is nothing faster on the horizon or better geared to support quality-of-service (QoS) based applications, and ATM has been designed from the outset to have the flexibility and scalability that will enable it to match future advances in computer and telecommunications technology.

Microsoft's support of LAN emulation and its plans for supporting ATM natively are part of the company's commitment to making Windows® the communications platform of choice. No other company offers such a rich set of network communications and telephony support built-in to its operating system products.

LAN Emulation Testing and Certification

To encourage third-party development of ATM interface products and to enable customers to purchase ATM solutions with more confidence, Microsoft has launched the first ATM LAN Emulation testing and logo certification program. Initially emulation is being tested for Ethernet LANs, with emulation for token ring LANs to be added soon.

The Ethernet LAN Emulation testing and logo certification program was launched with the release of Windows NT 4.0 and will soon be available for Windows® 95. ATM interface products from vendors including Madge Network Systems, Adaptec, and others have already passed this logo testing process and are now identified in the Windows Hardware Compatibility List (HCL) accessible from the Microsoft Web site. In addition, the drivers for these vendors' ATM cards are accessible and downloadable from the HCL. In the future, these ATM drivers, like other drivers which adhere to the relevant HCL requirements, will be included in the box with the operating system. (The drivers for Madge Network Systems and Adaptec are included in the box for Windows NT Server 4.0.) LAN Emulation supports ATM speeds of 25 Mb/s and OC3 (155 Mb/s).

The logo testing process is rigorous, involving stress tests at both the user-mode and kernel-mode levels. The ATM card vendor chooses the processor type or types (Intel, Alpha, MIPS, etc.) for which its devices will be tested in the logo program. Tests are performed with single and multiple instances of the adapter installed in the test machine.

The LAN Emulation Testing Process

The testing process has three basic parts. The first involves two ATM cards in the same PC. One card sends traffic out across the LAN to an ATM switch and back into the second card. The second test involves two ATM cards, each in a separate PC. Data is sent from one ATM card, across the LAN to an ATM switch, and on to the ATM card in the other PC. The third test involves one PC containing an ATM card and another PC containing an Ethernet card. Data is sent from the ATM card, through an ATM switch, to the Ethernet card. Different aspects of the LAN Emulation protocol are tested with these three scenarios. These tests are conducted on the particular processor type for which the vendor seeks certification, such as Intel or MIPS. This testing process takes only a few days, and the results are completely confidential, although card vendors are free to publish their cards' results.

The testing is done over twisted pair as well as fiber, at both 25 Mb/s and 155 Mb/s. The server side of the ATM LAN Emulation services test uses the UNI 3.1 call manager. To assure interoperability, each card is tested with a combination of cards from different vendors. Madge Network Systems and Adaptec were certified for ATM LAN Emulation in time for the drivers to be in the box with Windows NT 4.0. Other ATM LAN Emulation drivers which have passed the logo testing process, such as the FORE Systems driver, are available through the Microsoft hardware compatibility page at www.microsoft.com/hwtest.

While Ethernet emulation has already been tested, the same test configuration process will be provided for token ring when enough cards are available and the other testing criteria are met.

Benefiting Customers

The logo testing and certification program means that customers can buy ATM card products which have earned the Windows compatible logo with confidence that the ATM cards will perform reliably with a broad set of applications.

Benefiting Vendors

ATM card vendors can make use of the logo testing process to ensure a higher degree of customer satisfaction with their products. The logo testing program provides a standard measure of compatibility and performance for ATM card vendors, a standard which has, to date, been lacking.

Vendors will benefit even more when Microsoft releases the NDIS extensions for native ATM support and begins offering native ATM drivers. Native ATM support at the operating system level will enable a new class of guaranteed quality-of-service (QoS) based applications. Because native support will mean that the call manager and LAN Emulation will be provided in the operating system (for both Windows NT and Windows 95), developers will be free of the need to develop or license their own call managers and LAN Emulation modules. This support should allow ATM hardware to be more easily and more quickly developed and lower the costs to develop and deploy ATM-based solutions.

Taking a Closer Look at ATM

ATM is emerging as an important worldwide standard for the transmission of information because of its ability to efficiently carry all traffic types, including the most time-sensitive, such as real-time voice and video. (For a more detailed description of ATM, please see Appendix A, "An ATM Primer.")

ATM is compatible with existing physical networks such as twisted pair, coax and fiber optics, because it isn't design-limited to a specific type of physical transport. This also means that ATM, unlike conventional LANs, has no inherent speed limit. In contrast, when Ethernet speed was increased from ten to 100 megabits per second, its architecture required a reduction in the length of Ethernet segments from 2,500 meters to 250 meters. Similarly, Token Ring has gating factors on its speed. But with ATM, there's nothing in the architecture that limits speed. An ATM network can operate as fast as a physical layer can be made to run.

Even more importantly, a homogenous ATM LAN can connect through an ATM WAN to another ATM LAN, providing end-to-end quality-of-service guarantees beyond those offered by any other transport system.

As noted earlier, ATM switching technology is so efficient that it already is being widely and rapidly adopted by the telecommunications industry, providing the WAN infrastructure that will allow ATM LANs to enable applications that would be either impossible or impractical with conventional LAN technology.

While 100-megabit Ethernet and other high speed networks can provide comparable bandwidth, only ATM can provide the QoS guarantees required for confidently deploying real-time telephony, video streaming, smooth videoconferencing, and other no-delay voice and video applications. The need for QoS is so vital to the industry that a number of initiatives are underway to provide QoS support for TCP/IP based networks, including the Resource Reservation Protocol (RSVP), a specification proposed by the Internet Engineering Task Force (IETF), allowing software developers to "reserve" bandwidth on the Internet to deliver real-time video and audio data streams. With the widespread adoption of IP as a protocol for the Internet and intranets, these QoS efforts are relevant, and Microsoft will support these other standards. But ATM is built from the ground up to support QoS.

Moving from Connectionless to Connection Oriented

To better appreciate connection-oriented ATM, it is helpful to review connectionless systems. LAN architectures, whether Ethernet, token ring, or FDDI, share certain characteristics. Each station is connected to the network via an adapter card, which has a driver, above which is a protocol driver, such as TCP/IP. In traditional LANs, such as Ethernet, the protocol is connectionless, meaning that the protocol driver simply provides a packet with a source address and a destination address and sends it on its way. Being joined by a common medium, each station will see the packets of data put on the wire by each of the others, regardless of whether the packet is passed sequentially, as in a ring topology, or broadcast, as with Ethernet. The primitive from the station to the wire, or from the protocol to the adapter, is simply "send packet."

Once the packet has been sent, according to the specifications of whatever LAN is being used, the adapter knows that the packet is visible to all stations on the network. Each station has an adapter card, which processes the packet and examines the destination address. If the address applies to that machine, the adapter generates a hardware interrupt and accepts the packet. If not, the adapter discards it. Again, this is called connectionless because no logical connection to the recipient address was made; the packets were simply addressed and put onto the network.

A LAN such as Ethernet offers very few services, because all an Ethernet card can do is take a packet and send it. Being connectionless, it can provide no guarantees or similar features. For example, it can't determine the status of the target machine. This is why developers rarely write applications directly to Ethernet. Rather, protocol drivers are used to add sequence numbers, verify packet arrival (retransmitting if necessary), partition big messages into smaller ones, and so on. All of these services add time to the transmission, and none of them can provide end-to-end quality-of-service guarantees.

Making a Virtual Circuit Connection with ATM

Because a homogenous ATM network is connection-oriented, instead of just addressing a packet and sending it, a station requests that a call be made and provides the ATM address of the target machine or adapter, along with service access point (SAP) information, which identifies a particular application running on the target machine. So a connection is made from a particular application on one machine to a specified application on another machine, through the ATM cards on both machines.

This virtual circuit (VC) allows ATM to provide services such as QoS, because the VC allows applications to specify the guarantees they want. When these guarantees are made, the local adapter card, any switches in the path, whether local or wide area, and finally the adapter and application on the other machine agree to provide that QoS, such as a minimum bandwidth.

Providing QoS

ATM provides quality of service guarantees by establishing an end-to-end virtual connection in the network prior to transferring information. A contract must be negotiated between users and the ATM network layer to define the service between two or more nodes and specify connection parameters. The first part of the contract is the traffic descriptor, which characterizes the load to be offered, and the second specifies the QoS desired by the user and accepted by the network operator or carrier. To provide traffic contracts, ATM employs a number of QoS parameters, with a worst case performance specified for each, which the network operator must meet or exceed.

Calling First to Guarantee QoS

ATM allows applications to bypass the TCP/IP layer and talk directly to the hardware. Once a connection is made, connection-oriented media allow the status of the destination to be determined, and a virtual circuit to be established directly from one application to another. ATM takes large chunks of data, segments them into uniform cells for transmission, and reassembles them on the receiving end. Because ATM is connection-oriented, everything will arrive in sequence.

When a target application receives a MakeCall request, it examines the QoS to determine if it can meet the requested specifications—two megabits per second, for example. If the requested bandwidth is available through the network, the request will be granted, establishing a virtual circuit with a QoS guarantee which all of the intervening ATM hardware will honor. This guarantee eliminates the need for ATM to provide acknowledgment. In effect, ATM cards offer essentially the same set of services as a conventional transport driver does above Ethernet. Applications can talk directly to ATM, or to a protocol driver sending packets over ATM.
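
The admission decision described above can be sketched in a few lines of Python. This is an illustrative model only (the `Switch` class, names, and numbers are invented for the example); it shows how a call is granted only if every hop can honor the requested bandwidth:

```python
# Illustrative model, not a real ATM stack: a call is admitted only if
# every switch along the path has enough spare bandwidth for the request.

class Switch:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0.0

    def can_admit(self, mbps):
        return self.reserved + mbps <= self.capacity

    def reserve(self, mbps):
        self.reserved += mbps

def make_call(path, requested_mbps):
    """Establish a VC only if every switch on the path can honor the QoS."""
    if all(sw.can_admit(requested_mbps) for sw in path):
        for sw in path:
            sw.reserve(requested_mbps)
        return True   # virtual circuit established; bandwidth guaranteed
    return False      # call rejected; no partial reservations left behind

path = [Switch(155.0), Switch(155.0), Switch(25.0)]
print(make_call(path, 2.0))   # True: 2 Mb/s fits on every hop
print(make_call(path, 30.0))  # False: the 25 Mb/s hop has only 23 Mb/s left
```

Once the circuit is established, the guarantee holds for its lifetime, which is why no per-cell acknowledgment is needed.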

This virtual circuit capability can function, regardless of distance, over private ATM networks or public carrier ATM switches, providing ATM services with a global reach.

Gaining Speed through Fixed Length Cells

ATM gains speed and efficiency through the use of uniform cells. Each cell has a 5-byte header and a 48-byte payload. This fixed length eliminates the need to waste CPU cycles looking for the end of a packet. The header is just five bytes, so it doesn't have to provide size, delimiter, or similar information. This switching is so efficient that within a homogenous ATM environment there's no need for routers.
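
The fixed cell layout can be made concrete with a short parsing sketch. The field positions (GFC, VPI, VCI, payload type, CLP, HEC) follow the ATM Forum UNI cell header format; the Python below is illustrative rather than production code:

```python
# A sketch of parsing the 5-byte UNI cell header of a 53-byte ATM cell.
# Field layout per the ATM Forum UNI specification.

CELL_SIZE, HEADER_SIZE = 53, 5

def parse_uni_header(cell: bytes) -> dict:
    assert len(cell) == CELL_SIZE, "every ATM cell is exactly 53 bytes"
    h = cell[:HEADER_SIZE]
    return {
        "gfc": h[0] >> 4,                           # generic flow control
        "vpi": ((h[0] & 0x0F) << 4) | (h[1] >> 4),  # virtual path identifier
        "vci": ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4),
        "pt":  (h[3] >> 1) & 0x07,                  # payload type
        "clp": h[3] & 0x01,                         # cell loss priority
        "hec": h[4],                                # header error control
        "payload": cell[HEADER_SIZE:],              # fixed 48-byte payload
    }

# Build a cell carrying VPI=1, VCI=32 and parse it back.
cell = bytes([0x00, 0x10, 0x02, 0x00, 0x00]) + bytes(48)
fields = parse_uni_header(cell)
print(fields["vpi"], fields["vci"])  # 1 32
```

Because every field sits at a fixed bit offset, a switch can extract the VPI/VCI in hardware without scanning for packet boundaries.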

In contrast, in a routed environment, a large data packet might arrive just before a small voice segment. The large data packet may not be time-critical, but the system will process the large packet before it can get to that time-sensitive voice packet. This problem doesn't exist on an ATM network.

Switching Made Simple with Virtual Circuits

When a virtual connection is made, each cell is given a VC identifier (VCI), which is carried in the cell header and looked up in a switch mapping table. The table handles the simple crossover process of mapping incoming VC cells from the incoming port to the assigned outgoing port, and then moving to the next incoming cell.

During transmission, the switch looks at the header information and at its mapping table, and it switches the VC to the appropriate port. The cells arrive at the destination ATM card, which also knows the VC number, reassembles the cells into the protocol data unit, and sends it out to whatever application or driver that VC number was linked to as the SAP during initial connection.

Because the VC path is used all the way through, there is nothing new about transferring data over a WAN which also has ATM switches. This is very different from using a router, as is done with conventional WANs.
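
The crossover process described above amounts to one table lookup per cell. A toy Python model follows (port and VCI numbers are invented for the illustration):

```python
# Toy model of the VCI crossover table: the switch maps (incoming port,
# VCI) to (outgoing port, VCI) with a single lookup per cell -- no route
# computation, which is why cell switching is so fast.

class AtmSwitch:
    def __init__(self):
        self.table = {}  # (in_port, in_vci) -> (out_port, out_vci)

    def add_mapping(self, in_port, in_vci, out_port, out_vci):
        """Installed at connection setup, when the VC is signaled."""
        self.table[(in_port, in_vci)] = (out_port, out_vci)

    def switch_cell(self, in_port, in_vci, payload):
        out_port, out_vci = self.table[(in_port, in_vci)]  # one lookup
        return out_port, out_vci, payload  # cell forwarded, VCI rewritten

sw = AtmSwitch()
sw.add_mapping(in_port=1, in_vci=42, out_port=3, out_vci=77)
print(sw.switch_cell(1, 42, b"hello"))  # (3, 77, b'hello')
```

Note that the VCI is rewritten hop by hop: each switch assigns its own outgoing identifier when the mapping is installed at call setup.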

Seamlessly Combining Datacom and Telecom

Before ATM, the datacom and telecom worlds were separate and had different protocols. Whenever they met, such as at the router, significant and time-consuming translations were required. ATM brings the two worlds together. LAN ATM cards and switches connect to WAN ATM switches. This better enables voice to be done from a workstation, either through a separate physical line to a switch or using a phone as a peripheral to the PC, through an add-in card, to the switch and onto the network. With ATM switches from one end to the other, QoS can be absolutely guaranteed all the way through.

Choosing the Level of QoS

ATM provides great flexibility for applications to set quality-of-service requirements. Video conferencing might require a high QoS, while data may allow a lower setting. A very granular set of parameters exists by which to specify what QoS is required. An application can specify the amount of bandwidth, the maximum delay, and the maximum cell-loss rate. There are about twenty different parameters, all having to do with time guarantees. Some of the most important QoS parameters are listed below.

Peak cell rate (PCR)

The maximum rate (in cells per second) at which the sender wants to send the cells.

Sustained cell rate (SCR)

The expected or required cell rate averaged over a long interval.

Minimum cell rate (MCR)

The minimum number of cells per second that the user considers acceptable. If the carrier cannot guarantee this much bandwidth, it must reject the connection.

Cell Delay Variation (CDV or Jitter)

How uniformly the ATM cells are delivered. ATM layer functions may alter the traffic characteristics by introducing cell delay variation. When cells from two or more ATM connections are multiplexed (MUX), cells of a given connection may be delayed when cells of another connection are being inserted at the output of the multiplexer. (Multiplexing at the transport layer refers to placing several transport connections onto one virtual circuit).

Cell Delay Variation Tolerance (CDVT)

The amount of variation present in cell transmission times. CDVT is specified independently for peak cell rate and sustained cell rate. For a perfect source operating at PCR, every cell will appear exactly 1/PCR after the previous cell. However, for a real source operating at PCR, some variation in cell transmission time will occur. CDVT controls the amount of variation acceptable using a leaky bucket algorithm.

Cell Loss Ratio (CLR)

The fraction of the transmitted cells that are not delivered or are delivered so late as to be useless (for example, for real-time traffic).

Cell Transfer Delay (CTD)

The average transit time from source to destination.

Cell Error Ratio (CER)

The fraction of cells that are delivered with one or more bits wrong.

Severely Errored Cell Block Ratio (SECBR)

The fraction of N-cell blocks of which M or more cells contain an error.

Cell Misinsertion Rate (CMR)

The number of cells per second that are delivered to the wrong destination because of an undetected error in the cell header.

The QoS of an ATM connection refers to the cell loss, the delay, and the delay variation incurred by the cells belonging to that connection. QoS of an ATM connection is closely linked to the bandwidth it uses. For example, using more bandwidth increases the cell loss, the delay, and the delay variation incurred, therefore decreasing the QoS for cells of all connections that share those resources.
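
The leaky bucket algorithm mentioned under CDVT can be sketched as the ATM Forum's Generic Cell Rate Algorithm in its virtual-scheduling form. In the illustrative Python below, `T` is the ideal inter-cell gap (1/PCR) and `tau` the tolerance (CDVT), both in arbitrary time units:

```python
# A minimal sketch of the leaky-bucket conformance check (the Generic
# Cell Rate Algorithm, virtual-scheduling form). A cell arriving earlier
# than its theoretical arrival time minus tau is non-conforming.

def gcra(arrival_times, T, tau):
    """Return a conforming/non-conforming flag for each cell arrival."""
    tat = 0.0  # theoretical arrival time of the next cell
    flags = []
    for t in arrival_times:
        if t < tat - tau:
            flags.append(False)    # too early: violates the contract
        else:
            tat = max(t, tat) + T  # conforming: push TAT forward
            flags.append(True)
    return flags

# Cells paced exactly at the 1/PCR gap of 10 units all conform;
# a burst arriving too early does not.
print(gcra([0, 10, 20, 30], T=10, tau=2))  # [True, True, True, True]
print(gcra([0, 1, 2, 30], T=10, tau=2))    # [True, False, False, True]
```

A larger `tau` lets the network tolerate more jitter before declaring cells non-conforming, at the cost of admitting burstier traffic.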

Going Beyond RSVP

As noted earlier in this paper, Microsoft will also support the Resource Reservation Protocol (RSVP) for IP, allowing software developers to "reserve" bandwidth on the Internet.

RSVP, if completely implemented, is intended to provide QoS over any media, even if the media itself provides none. But RSVP allows only a much less granular, more generic QoS guarantee. The present RSVP definition assumes a number of stations, all connected to a switch that handles local traffic, which in turn is connected to a router that provides WAN access. The RSVP definition is concerned primarily with the router, which means that when one router wants to talk to another, RSVP can request a certain quantity of bandwidth. But the Internet allows connections over various types of routers, most of which don't support RSVP.

Additionally, RSVP doesn't yet apply to the station or the switch, meaning that an Ethernet card knows nothing about quality of service, nor about limiting packet release to ensure that it stays within its allocation. These things have to be done in software via packet schedulers. There is no plan or infrastructure for putting RSVP on the switch. With ATM, every component in the line is ATM-based, so it can provide absolute guarantees.

RSVP also doesn't provide a mechanism for tracking and billing for quality of service, which is a major concern for carriers. Tracking and billing are easily done using ATM.

As noted, RSVP does not address all the needs of QoS, but RSVP and related protocols are significant. Microsoft will continue to support efforts related to enabling real-time multimedia applications to run on IP or other types of networks. For many customers, this measure of QoS will be sufficient. For other customers, ATM will be required.

Exploring the Types of ATM Services

ATM provides multiple types of QoS service, including:

  • Constant Bit Rate

  • Variable Bit Rate

  • Available Bit Rate

  • Unspecified Bit Rate

Here's a look at what each type of service offers.

Constant Bit Rate

Constant Bit Rate (CBR) guarantees that the entire network, including each intervening switch from origin to destination, will provide an agreed-upon quantity of bandwidth at all times. From the standpoint of a carrier, CBR is expensive to provide, because the guaranteed bandwidth must be reserved for that customer even if the customer isn't using it. The carrier can't let another customer use that bandwidth in the switch, because the purpose of CBR is to provide an absolute guarantee, which also assures that no cells will be lost.

A quality-of-service contract, like any other contract, has two sides. An application using CBR will ask for a certain quantity of bandwidth, such as two megabits per second. The adapter, in turn, has to guarantee that it won't send more than the agreed upon amount of data. If it exceeds that limit, the switch may dump the extra bits. Making sure that no cells are lost depends on the adapter fulfilling its side of the contract. Lossless transmission is possible, but only if the adapter is careful. If not, the switch can simply dump packets that exceed the terms of the contract. For this reason, the adapter on the PC must do traffic shaping, or flow control, so that nothing is lost.
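
The traffic shaping the adapter must perform can be sketched in a few lines of Python. This is an illustrative model, not driver code; the function name and time units are invented for the example:

```python
# A sketch of adapter-side traffic shaping under a CBR contract: cells
# are released no faster than the agreed rate, so the switch never sees
# excess traffic it would be entitled to drop.

def shape(ready_times, cell_interval):
    """Given when each cell becomes ready, return when each is released."""
    release_times = []
    next_allowed = 0.0
    for t in ready_times:
        send_at = max(t, next_allowed)  # wait if we'd exceed the contract
        release_times.append(send_at)
        next_allowed = send_at + cell_interval
    return release_times

# A burst of four cells ready at once is smoothed to one per interval.
print(shape([0.0, 0.0, 0.0, 0.0], cell_interval=10))
# [0.0, 10.0, 20.0, 30.0]
```

By pacing its output this way, the adapter keeps its side of the contract, and lossless transmission becomes possible.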

A good example of CBR in action would be a situation with 200 incoming voice lines carrying 200 simultaneous vital calls, which are mapped to 200 hardware voice lines. In the middle is an ATM network. Because there is no control over the incoming data, it must be handled in real time, which requires CBR.

Variable Bit Rate

Another form of QoS is Variable Bit Rate (VBR). It may be used in cases where an adapter would like two megabits per second but can live with an average of only one megabit per second. It could negotiate for two megabits for a certain number of seconds in each minute, as a peak, with no cells lost. The switch guarantees an average, over time, of one megabit per second, with two megabits provided when available. It can also guarantee an average peak bandwidth per period, in addition to the flat average. This is less expensive than CBR, because although the adapter requested a peak of two megabits per second, the carrier has to provide an average of only one. If other traffic needs to be put on the backplane of the switch and bandwidth is running out, the switch can reclaim the second megabit, because it guaranteed only an average of one. It might even go as low as 512 Kb/s, as long as it can keep the guaranteed average.

VBR and Silence Suppression

VBR can be used efficiently for voice traffic, taking advantage of silence suppression. During pauses in a normal conversation, other data can be sent on the vacant bandwidth. For example, a video conferencing application, with the audio over a speaker, could use VBR with silence suppression. Because VBR guarantees an average bandwidth, software can burst data when the peak bandwidth is available. This works when the average is adequate to transfer the voice traffic in compressed, silence-suppressed form. When less flexibility exists, CBR may be necessary, but it usually isn't required for video on demand, video conferencing, and the like, which can typically be done with VBR.

Available Bit Rate

Available Bit Rate (ABR) is the newest of these services and the most complex, but it is the cheapest service to buy from a carrier. ABR employs a feedback loop between the adapter and the switch. ABR lets an adapter say that it would like, for example, two megabits, but will take whatever it can get. The switch says that it can provide two megabits initially, but the elaborate protocol in ABR allows the switch to lower that bandwidth. Packets from the switch tell the card that its bandwidth is being reduced, and the card agrees to source a reduced amount of data. Later on, the switch might reduce bandwidth again, or increase it. A constant feedback loop guarantees that each side knows the status of the other. Because the adapter always knows the available bandwidth, it doesn't overrun the limit.
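
The feedback loop can be modeled in a few lines. This is a toy illustration only: real ABR uses resource-management cells with explicit rate fields, and the class and numbers here are invented:

```python
# Toy model of the ABR feedback loop: resource-management (RM) cells
# from the switch tell the adapter the rate it may currently source,
# and the adapter never sends faster than the most recent allowed rate.

class AbrAdapter:
    def __init__(self, desired_rate):
        self.allowed = desired_rate   # start at the requested rate

    def on_rm_cell(self, explicit_rate):
        self.allowed = explicit_rate  # switch raised or lowered our rate

    def send_rate(self):
        return self.allowed           # never overrun the current limit

adapter = AbrAdapter(desired_rate=2.0)
print(adapter.send_rate())   # 2.0
adapter.on_rm_cell(0.5)      # congestion: switch cuts the rate
print(adapter.send_rate())   # 0.5
adapter.on_rm_cell(1.5)      # congestion eased: rate raised again
print(adapter.send_rate())   # 1.5
```

Because the adapter throttles itself to the advertised rate, the switch can reassign spare bandwidth to other traffic without dropping cells.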

ABR Resembles a LAN

ABR is very popular because it resembles a LAN: although in theory it has a certain amount of bandwidth available, in practice data is transferred at a much lower rate. Sometimes it will be fast, and sometimes it will be slow. ABR makes sense because QoS applications are rare today. Customers like the idea of QoS, but might initially still be using Ethernet, which acts like ABR. Therefore, when customers use ATM cards, their applications will often use ABR.

A major advantage of ABR is that it is inexpensive, because the switch is providing whatever bandwidth is available. Net managers can migrate to ATM, and everything they have today will still work because of ABR. And the presence of ATM means that when an application needs the enhanced capabilities of ATM, the speed of ATM will be available, providing migrating users with the best of both worlds.

Unspecified Bit Rate

Unspecified Bit Rate (UBR) has no bandwidth guarantee. The other forms of QoS all require the adapter to know that if it doesn't exceed a certain limit, the switch will not drop packets. UBR provides no contract whatsoever. An adapter just gets whatever bandwidth is available at the time. Cells being sent out onto the net may all be dropped, or they could all be sent. UBR is of course the cheapest, because the carrier makes absolutely no guarantees at all. It's similar to being on standby. UBR is useful because it's like UDP (User Datagram Protocol) today on TCP/IP. UDP provides no guarantee. There may be no bandwidth available, and no way to determine if a packet reaches its destination. A good example of UBR is a Dow Jones quote feed coming to a window on a group of PCs. If someone misses a stock quote when it scrolls by, they'll see it on the next pass, and it's not important to see it immediately.

Getting Started with LAN Emulation

As was noted earlier, ATM resides comfortably on a heterogeneous network, which is fortunate because it allows network managers to incrementally build toward a homogenous ATM environment. A key part of transitioning toward a full ATM network implementation is Microsoft's testing and logo certification program for third-party LAN Emulation modules, which allow ATM cards to function like Ethernet cards before ATM is fully implemented on the network.

During this period of transition, the power of QoS won't be available, but the LAN Emulation Module provides the substantial benefit of accommodating legacy transport drivers as an organization moves toward the upgrade to a homogenous ATM network.

Resolving the Connectionless Ethernet World

The LAN Emulation Module allows an ATM card to function in an Ethernet environment by making the ATM card appear to overlying applications and protocols as if it were Ethernet. Emulation exposes Ethernet at the top, while allowing the card to function as ATM below. LAN Emulation works like a translator, working with the ATM card and the call manager to accept Ethernet headers and addresses, resolving the connectionless Ethernet world to the virtual-circuit requests of the ATM connection-oriented world.

The LAN Emulation module has both a client and a server component. Upon startup, all LAN Emulation clients on a network report to the LAN Emulation server to register and have their ATM addresses mapped to assigned Ethernet addresses.

At run time, the transport driver, such as TCP/IP, sends an Ethernet packet to a manufactured Ethernet address, which the LAN Emulation client maps to an ATM address to create a virtual circuit. The LAN Emulation server has a master database, or lookup table, specifying which Ethernet addresses map to which ATM addresses. An ATM address is returned to the client, which then makes a VC to that ATM address. In this way, the LAN Emulation client makes the world look like Ethernet above, while using actual ATM below.
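
The server's lookup table can be sketched as a simple map from Ethernet (MAC) addresses to ATM addresses. This is an illustrative model, and the addresses below are made up for the example:

```python
# Sketch of the LAN Emulation server's lookup table: clients register
# their ATM address against an Ethernet (MAC) address, and other clients
# resolve MAC addresses to ATM addresses before opening a VC.

class LaneServer:
    def __init__(self):
        self.table = {}  # MAC address -> ATM address

    def register(self, mac, atm_address):
        """Called by each LAN Emulation client at startup."""
        self.table[mac] = atm_address

    def resolve(self, mac):
        """Called at run time to map a packet's MAC to an ATM address."""
        return self.table.get(mac)  # None if the MAC is unknown

les = LaneServer()
les.register("00:A0:C9:14:C8:29", "47.0005.80FFE1000000F21A.0020481A.00")
print(les.resolve("00:A0:C9:14:C8:29"))
# the registered ATM address; the client then opens a VC to it
```

Once the client caches the resolved ATM address, subsequent packets to the same MAC go straight over the established virtual circuit.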

LAN Emulation not only allows ATM networks to be used with connectionless LAN cards, but also allows communication through an ATM card, through a switch, onto a homogenous Ethernet network, a quality which is important for migration.

Token Ring Emulation

Although Microsoft's LAN Emulation testing and certification program will initially be limited to Ethernet, the company's plans include expanding the program to certify cards that allow ATM machines to talk to token ring machines over a token ring network.

Supporting ATM Natively

Looking ahead, Microsoft plans to extend the Network Driver Interface Specification (NDIS) to support ATM natively in the operating system. The NDIS 4.1 extensions will provide kernel-mode drivers with direct access to connection-oriented media such as ATM. The extensions add a connection-oriented call plane with full QoS support, so a VC can be established with QoS. It then adds a connection-oriented data plane for sending and receiving data on that VC.

The new architecture will extend native ATM support to Windows Sockets (WinSock) 2.0 through the creation of a null transport layer that maps WinSock APIs to NDIS, extending direct ATM access to user-mode applications. WinSock, designed by the WinSock working group and developed by Microsoft, Intel, and other industry leaders, is a Win32® programming API that enables development of transport-independent network applications. In April 1996, the ATM Forum membership approved the WinSock 2 API's ATM extension as a legitimate syntax mapping to the ATM Forum API specification.


Figure 1: Microsoft plans to extend the Network Driver Interface Specification (NDIS) to support ATM natively in the operating system. On the left is NDIS today: third-party ATM solution vendors provide their own LAN Emulation Client (LEC), UNI call manager, and hardware interface software. On the right is the Windows architecture for native ATM support, in which Microsoft will provide the LEC, the UNI call manager, and IP over ATM (IETF RFC 1577) natively in the operating system. The WinSock 2 service provider for ATM will also enable more QoS-based applications to run over ATM with Windows.

Also included will be the User Network Interface (UNI) 3.1 call manager. UNI 3.1 is the signaling protocol standardized by the ATM Forum. NDIS 4.1 will also accommodate "pluggable" call managers for media-specific call signaling.

In addition, the new architecture calls for support of classical IP over ATM networks, as described in the IETF standard RFC 1577, which enables IP-based applications to work over ATM networks.

Microsoft has also entered into licensing agreements with FORE Systems and Olicom, providing additional resources for ATM developers and vendors. Microsoft has licensed Olicom's UNI 3.1 call manager and FORE Systems' ForeThought ATM LAN Emulation Client software for integration with future versions of Microsoft's Windows operating system products. These licensed technologies provide ATM solution developers with an additional path for bringing their products to market as quickly and as inexpensively as possible.

Taking a Closer Look at the Call Manager

The call manager, or signaling module, is what enables one ATM station to establish a virtual circuit—complete with QoS guarantees—with another ATM station, all prior to the packets actually being transmitted. The call manager resides on each switch, so that each switch can respond to VC and QoS requests, which is what gives ATM its ability to guarantee bandwidth allocation.

It is the call manager that essentially says, "Establish a virtual circuit to this ATM address, and to this service access point application, with this QoS guarantee." Once the virtual circuit has been established, the signaling module is out of the picture, and the application or driver can simply tell the hardware interface to send a certain packet on a specified VC number. The hardware interface, the NIC, and the switches all know the significance of that VC number, as well as its QoS and its mapping from input port to output port.

Protocols that now run on Windows NT or Windows 95, such as TCP/IP, IPX, and NetBEUI, are unable to create virtual circuits and would require significant rewriting to understand VC technology. This is why Microsoft is working on incorporating the Call Manager into the operating system, something that will greatly simplify development for ATM card vendors. Rather than having to develop, purchase, or license a custom call manager, something which is difficult to author, they will be able to provide a simple miniport that will plug into the call manager supplied in the Windows 95 or Windows NT operating system.

Allowing Multiple Call Managers

As noted above, Microsoft plans to use the UNI 3.1 call manager. But to create a more open system, the NDIS 4.1 interface will provide the capability to have multiple call managers. This means a vendor can create a pluggable custom call manager to support a switch that uses a proprietary signaling protocol other than UNI 3.1.

Accommodating IP with ATM ARP

Because Internet Protocol (IP) is so prevalent, Microsoft will provide direct IP address resolution to ATM, doing away with the need for IP to first be translated through a LAN Emulation module.

For TCP/IP, the IETF defined a special module, called "Classical IP over ATM," which is an Address Resolution Protocol (ARP) module. Defined in Internet RFC 1577, this ATM ARP will be included in future Windows 95 and Windows NT operating systems.

Microsoft's implementation will allow IP to work through either LAN Emulation or ATM ARP. This preserves connectivity with non-Windows systems that might implement only LAN Emulation or ATM ARP, but not both. Although ATM ARP will provide the most efficient transport, LAN Emulation is also required for NetBEUI, IPX, and similar protocols. And so the Windows implementation will allow either path.

Lowering the Cost of ATM Through Native Support

Providing native ATM support should help lower the cost of implementing ATM networks. Currently, the adapter vendor has to provide the signaling code and the LAN Emulation code to accommodate transports. When Microsoft provides ATM support at the operating system level, including the call manager and LAN Emulation client, vendors will be freed from the expense of either developing or licensing the complex signaling code.

Native ATM support will also help guarantee robustness, because the vendor has to write only a small amount of code. And as noted earlier, Microsoft native ATM support will expose APIs in kernel mode (through NDIS 4.1) and user mode (through WinSock 2.0), making it easier to write QoS applications.

Providing an SDK

Microsoft will provide a Software Developer's Kit for the makers of ATM network adapters to allow them to write NDIS 4.1 miniport drivers. Because LAN Emulation and signaling will be provided in the operating system, a miniport is basically a hardware interface, greatly simplifying code requirements for adapter vendors and reducing development time in bringing new products to market. The SDK is set for release for the fall of 1996, so developers can build and test their ATM miniport drivers even before Microsoft releases the operating system components that use the drivers, which are scheduled for delivery with Windows NT 5.0.

Summary

As ATM technology emerges as an important worldwide standard for the transmission of information, Microsoft is taking a leadership role in facilitating the deployment of ATM solutions on LANs and WANs. Microsoft's launching of the first ATM LAN Emulation logo and compatibility testing program will encourage third-party development of ATM interface products and enable customers to purchase ATM solutions with more confidence. Emulation is already being tested for Ethernet LANs, with emulation testing for token ring LANs to be added soon.

Launched with the release of Windows NT 4.0, the Ethernet LAN Emulation testing and logo certification program will soon be available for Windows 95. As this is written, ATM interface products from Madge Network Systems, Adaptec, and FORE Systems have already passed this logo testing process and are now identified in the Windows Hardware Compatibility List (HCL) accessible via the Microsoft Web site. Other vendors' ATM products are in the process of being tested.

ATM card vendors benefit from the testing and logo certification program because it provides a standard measure of compatibility and performance that has, to date, been lacking.

Customers benefit from the program because they can buy ATM card products that have earned the Windows-compatible logo with confidence that the cards will perform reliably with a broad set of applications, operating system versions, and CPU types.

And everyone benefits from the efficiency and scalability of ATM technology, with its fixed-length cells that allow switching to be accomplished at the hardware level, reducing the need for routers, buffering, and protocol translation. ATM is independent of the physical network, running on twisted pair, coax, or fiber. And by design, ATM has no speed limits.

Microsoft's plans to support ATM natively will open the way for QoS-based real-time video and low-delay voice applications that will change the way the personal computer is used, creating new opportunities for software and hardware vendors to deliver rich, and perhaps as yet unimagined, experiences for end users.

For More Information

For the latest information on Windows NT Server, check out our World Wide Web site at https://www.microsoft.com/backoffice or the Windows NT Server Forum on the Microsoft Network (GO WORD: MSNTS).

Please refer to Microsoft's network communications and telephony web site for the latest in technology and solution information pertaining to ATM and related topics.

https://www.microsoft.com/NTServer/commserv/techdetails/overview/communiv.asp

You can get more information about the ATM LAN Emulation testing program at this address: https://www.microsoft.com/hwtest/. From here, you can also link to the Windows Hardware Compatibility List.

Appendix A: An ATM Primer

Asynchronous Transfer Mode (ATM) is a type of digital packet-switching technology designed to relay and route network traffic by means of an address contained within the packet. ATM uses very short, fixed-length packets referred to as cells.

ATM cells have a fixed length of 53 bytes. By using fixed-length cells, the information can be transported in a predictable manner. This predictability accommodates different traffic types on the same network—for example, voice, data, and video.

The ATM cell is broken into two main sections, the header and the payload. The header (5 bytes) is the addressing mechanism and is significant for networking purposes as it defines how the cell is to be delivered. The payload (48 bytes) is the portion that carries the actual information—either voice, data, or video. (The payload is also referred to as the user information field.) An ATM cell is shown below.

[Figure: an ATM cell, showing the 5-byte header and 48-byte payload]
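The fixed 53-byte layout can be expressed directly. The tiny sketch below (illustrative names only) simply enforces the 5 + 48 split and zero-pads short payloads, as a real segmentation layer would:

```python
ATM_HEADER_BYTES = 5
ATM_PAYLOAD_BYTES = 48
ATM_CELL_BYTES = ATM_HEADER_BYTES + ATM_PAYLOAD_BYTES   # always 53

def make_cell(header, payload):
    # Every cell is exactly 53 bytes; short payloads are zero-padded so
    # transport stays predictable regardless of traffic type.
    if len(header) != ATM_HEADER_BYTES:
        raise ValueError("header must be exactly 5 bytes")
    if len(payload) > ATM_PAYLOAD_BYTES:
        raise ValueError("payload larger than 48 bytes must be segmented")
    return bytes(header) + bytes(payload).ljust(ATM_PAYLOAD_BYTES, b"\x00")
```

It is this fixed size that lets switches make forwarding decisions in hardware without examining a variable-length frame.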

ATM Model

The ATM protocol reference model (PRM) consists of three planes: a user plane to transport user information, a control plane to manage signaling information, and a management plane to maintain the network and carry out operational functions. The management plane is further subdivided into layer management and plane management to manage the different layers and planes.

Protocols of the control plane and the user plane include the following layers: the physical layer, the ATM layer, and the ATM adaptation layer. The ATM model can also include other layers users might want to add in addition to those listed. The following graphic shows the ATM model.

[Figure: the ATM protocol reference model]

Physical Layer

The physical layer deals with physical medium issues such as voltage and bit timing. Because ATM has been designed to be independent of the transmission medium, it does not prescribe a specific set of rules, but rather it indicates that ATM cells may be sent on a wire or fiber by themselves or packaged inside the payload of other carrier systems.

ATM Layer

The ATM layer deals with cells and their transport; it defines cell layout and the meaning of header fields. The ATM layer includes two interfaces: the user-network interface (UNI) and the network-network interface (NNI). The UNI defines the boundary between a host and an ATM network (or the customer and the carrier). The NNI refers to the line between two ATM switches. The header field varies slightly at the UNI and NNI. The following table illustrates the bit allocation of a cell header for the two interfaces.

Function                           UNI bits   NNI bits   Role
General Flow Control (GFC)             4          0      Used for physical access control.
Virtual Path Identifier (VPI)          8         12      Identifies the virtual path to which the cell belongs.
Virtual Circuit Identifier (VCI)      16         16      Identifies the virtual circuit within the virtual path, distinguishing cells of different connections.
Payload Type Identifier (PTI)          3          3      Indicates the presence of user information and whether the given ATM cell experienced traffic congestion.
Cell Loss Priority (CLP)               1          1      Indicates whether the cell may be discarded during a time of network congestion.
Header Error Control (HEC)             8          8      Used for detecting and correcting header errors.
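The UNI bit allocation above can be exercised with a small pack/unpack routine. This is an illustrative sketch: real hardware computes the HEC as a CRC-8 over the first four header bytes, whereas here the HEC value is simply passed through.

```python
def pack_uni_header(gfc, vpi, vci, pti, clp, hec):
    # UNI layout: GFC (4 bits), VPI (8), VCI (16), PTI (3), and CLP (1)
    # fill the first 32 bits; the fifth byte carries the HEC.
    assert gfc < 16 and vpi < 256 and vci < 65536 and pti < 8 and clp < 2
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pti << 1) | clp
    return word.to_bytes(4, "big") + bytes([hec])

def unpack_uni_header(header):
    # Reverse the packing; returns the six fields as a dictionary.
    word = int.from_bytes(header[:4], "big")
    return {"gfc": word >> 28, "vpi": (word >> 20) & 0xFF,
            "vci": (word >> 4) & 0xFFFF, "pti": (word >> 1) & 0x7,
            "clp": word & 0x1, "hec": header[4]}
```

An NNI header would differ only in the first byte, where the 4 GFC bits become 4 additional VPI bits.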

The ATM layer also deals with establishing and releasing virtual circuits, described in the next section. The ATM layer provides a transparent connection, called the ATM connection, to the higher layer—an end-to-end connection is established by a concatenation of connection elements.

Virtual Circuits and Virtual Paths

The two types of ATM connections are virtual circuits (VCs) and virtual paths (VPs). VC is a general term that signifies a logical unidirectional connection between two endpoints. ATM cells transported on a particular VC are associated by a common unique identifier referred to as a virtual circuit identifier (VCI). When a network connection is established, a route from the source computer to the destination computer is selected as part of the connection setup, and that route is used for all traffic flowing over the connection. An example of an end-to-end connection between two users, using only the VCI field of the ATM header, is presented next.

Suppose a virtual circuit consisting of a four-hop path is selected between two users, A and B, by a routing algorithm. After the network finds a path between the two nodes, it assigns the VCI values to be used at each node along the path and sets up routing tables at nodes N1 to N5. When the transmission starts, all cells of the connection follow the same path in the network. Once the communication is completed, one of the two end users releases the connection, and the VC is also terminated. The end-to-end connection defined by the concatenation of VC links is called a virtual circuit connection (VCC).
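The per-node VCI assignment described above can be modeled as a chain of lookup tables, one per switch. The node names and VCI values below are illustrative only, not drawn from any standard.

```python
def setup_circuit(nodes, vcis):
    # vcis[i] is the VCI on the link entering nodes[i]; each node's
    # routing table maps it to the VCI assigned on its outgoing link.
    return {node: {vcis[i]: vcis[i + 1]} for i, node in enumerate(nodes)}

def forward(tables, nodes, vci):
    # All cells of the connection follow the same path; each switch
    # rewrites the VCI according to its routing table.
    for node in nodes:
        vci = tables[node][vci]
    return vci

# Hypothetical path from user A to user B through nodes N1..N5.
path = ["N1", "N2", "N3", "N4", "N5"]
tables = setup_circuit(path, [41, 52, 63, 74, 85, 96])
```

Calling forward(tables, path, 41) carries a cell across every hop, returning 96, the VCI assigned on the final link to B; releasing the connection amounts to deleting these table entries.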

Virtual path (VP) is a generic term for a bundle of virtual circuit connections, all having the same VPI value and terminating at the same pair of endpoints. In ATM cells, a virtual path identifier (VPI) is assigned for each VP. When a VP is routed, all the VCs belonging to that VP are routed together.

The term virtual path connection (VPC) is used to refer to a sequential collection of VPs—VPC defines a route between the source and destination nodes.

ATM Adaptation Layer

The ATM adaptation layer (AAL) is composed of two sublayers: the convergence sublayer (CS) and the segmentation and reassembly (SAR) sublayer. The CS functions convert the user service information coming from the upper layer into a protocol data unit (PDU), and also carry out the opposite process.

The SAR layer segments PDUs to form the user data field of the ATM cell on the transmission side and puts them back together again at the destination side. The AAL's main function is to resolve any disparity between services provided by the ATM layer and the service requested by the user. The AAL adapts user service information with the ATM cell format and handles such issues as transmission error processing, misinserted or lost cell processing, flow control functions to meet the types of services demanded by the user, and timing control functions to restore the user signal.

ATM Connection Setup

ATM connections are established as either permanent virtual circuits (PVCs) or switched virtual circuits (SVCs). As their name implies, PVCs are always present, whereas SVCs must be established each time a connection is set up.

To set up a connection, a signaling circuit is used first. A signaling circuit is a pre-defined circuit (with VPI = 0 and VCI = 5) that is used to transfer signaling messages, which are in turn used for making and releasing calls or connections. If a connection request is successful, a new set of VPI and VCI values is allocated, on which the parties that set up the call can send and receive data.

Six message types are used to establish virtual circuits, each message occupying one or more cells and containing the message type, length, and parameters. The following table lists these message types.

Message           Significance if sent by host           Significance if sent by the network
SETUP             Requests that a call be established    Indicates an incoming call
CALL PROCEEDING   Acknowledges the incoming call         Indicates the call request will be attempted
CONNECT           Indicates acceptance of the call       Indicates the call was accepted
CONNECT ACK       Acknowledges acceptance of the call    Acknowledges making the call
RELEASE           Requests that the call be terminated   Terminates the call
RELEASE ACK       Acknowledges releasing the call        Acknowledges releasing the call

The sequence for establishing and releasing a call is as follows:

  1. The host sends a SETUP message on the signaling circuit.

  2. The network responds by sending a CALL PROCEEDING message to acknowledge receiving the request.

  3. Along the route to the destination, each switch receiving the SETUP message acknowledges it by sending the CALL PROCEEDING message.

  4. When the SETUP message reaches its final destination, the receiving host responds by sending the CONNECT message to accept the call.

  5. The network sends a CONNECT ACK message to acknowledge receiving the CONNECT message.

  6. Along the route back to the sender, each switch that receives the CONNECT message acknowledges it by sending CONNECT ACK.

  7. To terminate the call, a host (either the caller or the receiver) sends a RELEASE message, which propagates to the other end of the connection and releases the circuit. Again, the message is acknowledged at each switch along the way.
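The setup portion of this handshake, as seen from the calling host, can be modeled as a simple ordered sequence check. Real UNI 3.1 signaling has a much richer state machine; the class and constant names here are invented for illustration.

```python
# Signaling travels on the reserved circuit VPI = 0, VCI = 5.
SIGNALING_VPI, SIGNALING_VCI = 0, 5

SETUP_SEQUENCE = ["SETUP", "CALL PROCEEDING", "CONNECT", "CONNECT ACK"]

class SignalingChannel:
    """Tracks the call-setup messages seen by the calling host."""

    def __init__(self):
        self.log = []
        self.connected = False

    def deliver(self, message):
        # Setup messages must arrive in the prescribed order.
        self.log.append(message)
        if self.log != SETUP_SEQUENCE[:len(self.log)]:
            raise ValueError("out-of-order signaling message: " + message)
        if self.log == SETUP_SEQUENCE:
            self.connected = True            # the VC is now usable for data

channel = SignalingChannel()
for message in SETUP_SEQUENCE:
    channel.deliver(message)
```

Once connected is set, data flows on the newly allocated VPI/VCI pair rather than on the signaling circuit.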

Sending Data to Multiple Receivers

In ATM networks, users can set up point-to-multipoint (P/MP) calls, with one sender and multiple receivers. A P/MP VC allows an endpoint called the root node to exchange data with a set of remote endpoints called leaves. To set up a point-to-multipoint call, a connection to one of the destinations is set up in the usual way. Once the connection is established, users can send the ADD PARTY message to attach a second destination to the VC returned by the previous call. To add receivers, users can then send additional ADD PARTY messages.

This process is similar to a user dialing multiple parties to set up a telephone conference call. One difference is that an ATM P/MP call doesn't allow data to be sent by parties towards the root (or the originator of the call). This is because the ATM Forum Standard UNI 3.1 restricts data flow on P/MP VCs to be from the root towards the leaves only.

ATM Switching

An ATM switch transports cells from the incoming links to the outgoing links, using information from the cell header and information stored at the switching node by the connection setup procedure. The connection setup procedure performs the following tasks:

  • Defines a unique connection identifier (a VPI/VCI pair) for each connection on the incoming link, and a unique connection identifier on the outgoing link.

  • Sets up routing tables at each switching node to provide an association between the incoming and outgoing links for each connection.

As mentioned previously, virtual path identifier (VPI) and virtual circuit identifier (VCI) are the connection identifiers used in ATM cells. To uniquely identify each connection, VPIs are uniquely defined at each link, and VCIs are uniquely defined at each VP. To establish an end-to-end connection, a path from source to destination has to be determined first. Once the path has been established, the sequence of links to be used for the connection and their identifiers are known.

VPIs are used to reduce the processing of an ATM switch by routing on the VPI field only. For example, VPI routing is useful when many VCs share a common physical route (similar to all phone connections between Seattle and Chicago). The following graphic provides an example of VPI routing.

[Figure: an example of VPI routing]
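VP switching can be sketched as a table keyed on the VPI alone; note that the VCIs ride through unchanged, which is what saves the switch per-circuit processing. The VPI and VCI values below are illustrative.

```python
def vp_route(cells, vpi_table):
    # Route on the VPI field only; every VC inside the virtual path is
    # carried along together, its VCI untouched.
    return [(vpi_table[vpi], vci) for vpi, vci in cells]
```

For example, three VCs sharing virtual path 5 are all re-routed by the single table entry {5: 9}, just as all phone connections between two cities share one trunk.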

ATM Service Types

To control the various types of network traffic, ATM standards were modified to define the types of services most commonly used. The service categories are listed below.

Constant Bit Rate (CBR)

CBR services generate traffic at a constant rate. With this class of service, the end station must specify bandwidth at the time a network connection is established. The network then commits that bandwidth along the connection route; this ensures that no network traffic will be lost due to traffic congestion, because connections are admitted only if the requested bandwidth can be guaranteed.

Variable Bit Rate (VBR)

VBR is divided into two subclasses, Real-Time VBR (RT-VBR), and Non-Real-Time VBR (NRT-VBR). RT-VBR is intended for applications that have variable bit rates and stringent real-time requirements such as interactive compressed video—video conferencing, for example. Real-time services can deteriorate in quality or become unintelligible if the associated information is delayed; they are sensitive to the time it takes for the ATM cells to be transferred.

NRT-VBR is intended for traffic where timely delivery is not as important, that is, the quality of non-real time services is insensitive to delays in information transfer. An example of non-real-time service is data transmission.

Available Bit Rate (ABR)

ABR service is intended for bursty traffic whose bandwidth is known approximately. Burstiness can be defined as the ratio of peak-to-average traffic generation rate. With ABR, it is possible to specify, for example, a fixed capacity of 5 Mb/s between two points, with peaks of up to 10 Mb/s. The network guarantees to provide 5 Mb/s at all times and will do its best to provide the peak capacity when needed, but it does not guarantee peak capacity.

With ABR service, the network provides feedback to the sender, asking it to slow down when traffic congestion occurs.

Unspecified Bit Rate (UBR)

UBR service allows a connection to be established without specifying the bandwidth expected from the connection. The network makes no guarantees for UBR service: it establishes the route but does not commit bandwidth. UBR can be used for applications that have no delivery constraints and do their own error and flow control. Examples of potential uses of UBR are e-mail and file transfer, as neither application has real-time characteristics.

Quality of Service (QoS)

Quality of service is the user's view of the service. In connection-oriented networks, users establish an end-to-end connection in the network prior to transferring information. As part of this process, the users and the ATM network layer must negotiate a contract defining the service to set up a virtual connection between two or more nodes and specify connection parameters. The first part of the contract is the traffic descriptor, which characterizes the load to be offered. The second part of the contract specifies the quality of service desired by the user and accepted by the network operator (or carrier).

To provide traffic contracts, the ATM standard defines a number of QoS parameters. For each QoS parameter, a worst-case performance is specified, and the network operator is required to meet or exceed it.

Some of the most important QoS parameters are listed below.

Peak cell rate (PCR)

The maximum rate (in cells per second) at which the sender wants to send the cells.

Sustained cell rate (SCR)

The expected or required cell rate averaged over a long interval.

Minimum cell rate (MCR)

The minimum number of cells per second that the user considers acceptable. If the carrier cannot guarantee this much bandwidth, it must reject the connection.

Cell Delay Variation (CDV or Jitter)

How uniformly the ATM cells are delivered. ATM layer functions may alter the traffic characteristics by introducing cell delay variation. When cells from two or more ATM connections are multiplexed (MUX), cells of a given connection may be delayed when cells of another connection are being inserted at the output of the multiplexer. (Multiplexing at the transport layer refers to placing several transport connections onto one virtual circuit).

Cell Delay Variation Tolerance (CDVT)

The amount of variation present in cell transmission times. CDVT is specified independently for peak cell rate and sustained cell rate. For a perfect source operating at PCR, every cell will appear exactly 1/PCR after the previous cell. However, for a real source operating at PCR, some variation in cell transmission time will occur. CDVT controls the amount of variation acceptable using a leaky bucket algorithm, described later in this section.

Cell Loss Ratio (CLR)

The fraction of the transmitted cells that are not delivered or are delivered so late as to be useless (for example, for real-time traffic).

Cell Transfer Delay (CTD)

The average transit time from source to destination.

Cell Error Ratio (CER)

The fraction of cells that are delivered with one or more bits wrong.

Severely Errored Cell Block Ratio (SECBR)

The fraction of N-cell blocks of which M or more cells contain an error.

Cell Misinsertion Rate (CMR)

The number of cells/second that are delivered to the wrong destination because of an undetected error in the cell header.

The QoS of an ATM connection refers to the cell loss, the delay, and the delay variation incurred by the cells belonging to that connection. The QoS of an ATM connection is closely linked to the bandwidth it uses. For example, using more bandwidth increases the cell loss, the delay, and the delay variation incurred, thereby decreasing the QoS for cells of all connections that share those resources.

ATM Traffic Control

Traffic control refers to a set of actions performed by the network to avoid congestion and to achieve predefined network performance objectives (for example, in terms of cell transfer delays or cell loss probability).

The ITU (International Telecommunications Union, formerly called CCITT) has defined a standard rule, called the Generic Cell Rate Algorithm (GCRA), to define the traffic parameters. This rule is used to unambiguously differentiate between conforming and nonconforming cells; that is, it provides a formal definition of traffic conformance to the negotiated traffic parameters.

Generic Cell Rate Algorithm

ITU recommendation I.371 defines two equivalent versions of the Generic Cell Rate Algorithm: the virtual scheduling (VS) algorithm and the continuous-state leaky bucket (LB) algorithm. For any sequence of cell arrival times, both algorithms determine the same cells to be conforming or nonconforming.

GCRA uses the following parameters:

ta = cell arrival time

I = increment, the nominal interval between two consecutive cells

L = limit, a tolerance value

TAT = theoretically predicted cell arrival time

At the arrival of a cell, the VS algorithm calculates the theoretical arrival time (TAT) of the cell, assuming equally spaced cells when the source is active. If the actual arrival time ta is no earlier than TAT - L, the cell is conforming, and TAT is advanced to max(ta, TAT) + I; otherwise, the cell arrived too early and is considered nonconforming.

The continuous-state leaky bucket algorithm can be viewed as a finite-capacity bucket whose contents leak out at a continuous rate of 1 per time unit and whose contents are increased by I for each conforming cell. If, at a cell arrival, the content of the bucket (after leaking) is no greater than L, then the cell is conforming; otherwise, it is nonconforming. The capacity of the bucket is L + I.
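Both versions of the GCRA are short enough to state directly. The sketch below follows the I.371 definitions given above; the sample arrival times used to exercise it are arbitrary, and both algorithms mark exactly the same cells as conforming.

```python
def gcra_virtual_scheduling(arrivals, I, L):
    # Virtual scheduling: compare each arrival against the theoretical
    # arrival time TAT; arriving earlier than TAT - L is nonconforming.
    conforming, tat = [], arrivals[0]
    for ta in arrivals:
        if ta < tat - L:
            conforming.append(False)        # arrived too early
        else:
            tat = max(ta, tat) + I          # schedule next theoretical arrival
            conforming.append(True)
    return conforming

def gcra_leaky_bucket(arrivals, I, L):
    # Continuous-state leaky bucket: contents drain at 1 per time unit
    # and rise by I for each conforming cell; capacity is L + I.
    conforming, x, lct = [], 0.0, arrivals[0]
    for ta in arrivals:
        xp = max(0.0, x - (ta - lct))       # drain since last conforming cell
        if xp > L:
            conforming.append(False)
        else:
            x, lct = xp + I, ta
            conforming.append(True)
    return conforming
```

With I = 2 and L = 0.5, a back-to-back burst at times 0, 1, 2, 3 has every other cell rejected, while a cell arriving within the tolerance L of its theoretical time still conforms.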

For more information about the GCRA, refer to ITU recommendation I.371.

Traffic Control Objectives

ATM traffic control has the following objectives:

  • To support a set of QoS classes sufficient for existing and foreseeable services

  • To maximize network resource utilization

  • To use network resources efficiently under any traffic circumstance

Two basic traffic control functions that ATM networks use to manage traffic are connection admission control (CAC) and usage parameter control (UPC), described below.

Connection Admission Control

Connection admission control (CAC) represents a set of actions carried out by the network during the call setup phase to accept or reject an ATM connection. If there are sufficient resources to accept the call request, and if the call assignment does not affect the performance quality of existing network services, then the call is granted. At call setup time, the user negotiates with the network to select the desired traffic characteristics.

Usage Parameter Control and Network Parameter Control

Using usage parameter control (UPC) and network parameter control (NPC), the network monitors user traffic volume and cell path validity. It monitors users' traffic parameters to ensure that they do not exceed the values negotiated at call setup time, and also monitors all connections crossing the user-network interface (UNI) or network-network interface (NNI).

The UPC algorithm must be capable of detecting illegal traffic conditions, determining whether the monitored parameters exceed the negotiated limits, and dealing quickly with parameter usage violations. To deal with usage violations, the network can apply several measures, for example, discarding the cells in question or releasing the connection that carries them.

Additional Control Functions

The following functions are used to support the actions of CAC and UPC/NPC.

Priority Control (PC)

Users can employ the cell loss priority (CLP) bit to create traffic flows of different priorities, allowing the network to selectively discard cells with low priority if necessary to protect those with high priority.
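A minimal sketch of CLP-based selective discard: when more cells arrive than the link can carry, CLP = 1 cells are dropped first. Real switches discard per queue in arrival order; this simplified version just partitions by priority, and the cell representation is invented for illustration.

```python
def discard_under_congestion(cells, capacity):
    # Keep high-priority (CLP = 0) cells first; low-priority (CLP = 1)
    # cells fill any remaining capacity and are otherwise discarded.
    high = [cell for cell in cells if cell["clp"] == 0]
    low = [cell for cell in cells if cell["clp"] == 1]
    return (high + low)[:capacity]
```

A sender can thus mark less important traffic with CLP = 1 and know that, under congestion, the network will sacrifice it before touching the CLP = 0 flow.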

Traffic Shaping (TS)

Alters the traffic characteristics of a stream of cells on a VC or VP connection. Traffic shaping can be used, for example, to ensure that traffic crossing the user-network interface (UNI) complies with the negotiated traffic contract, or to maximize bandwidth resource utilization. Examples of traffic shaping include peak cell rate reduction, burst length limiting, and reduction of cell delay variation by suitably spacing cells over time.

Network Resource Management (NRM)

Represents provisions made to allocate network resources to separate network traffic flows according to service characteristics.