
Scalable Networking Pack: Frequently Asked Questions

Published: May 4, 2006 | Updated: July 18, 2006
On This Page
 Scalable Networking Pack Basics
 Partners
 Scalable Networking Pack Technologies
 Deployment and Operations



 



Scalable Networking Pack Basics

Q. What is the Windows Server 2003 Scalable Networking Pack?
A. The Scalable Networking Pack adds new Windows Server 2003 (Service Pack 1 or later) architectural enhancements and application programming interfaces (APIs) to support the next generation of network acceleration and hardware-based offload technologies.

These Scalable Networking innovations help optimize Windows Server 2003 performance and maximize network throughput by reducing inefficient utilization of platform resources and offloading potentially CPU-intensive network packet processing to specialized network adapter hardware. This frees up CPU cycles for application-related tasks, such as supporting more user sessions or responding faster to network application requests. Depending on the workload, this can reduce network packet processing overhead by 20 to 100 percent, resulting in significant server performance gains.
Q. What is included in the Scalable Networking Pack?
A.

The Windows Server 2003 architectural innovations introduced in the Scalable Networking Pack include:

TCP Chimney Offload – provides seamlessly integrated support for network adapters with TCP Offload Engines (TOE)

Receive-side Scaling – dynamically load-balances inbound network traffic across multiple processors or cores

NetDMA – enables support for advanced direct memory access technologies, such as Intel I/O Acceleration Technology (Intel I/OAT)

All of the networking innovations included in the Scalable Networking Pack require no changes to existing applications or network management tools, nor do they require administrator intervention.

Q. How does the Scalable Networking Pack work?
A.

The architectural innovations included in the Scalable Networking Pack add seamlessly integrated support for the latest network acceleration and hardware-based offload technologies, such as TCP Offload Engines (TOE) and Intel I/O Acceleration Technology (Intel I/OAT). This enables automated and efficient delegation of network packet processing tasks (e.g., packet segmentation and reassembly) to a specialized network adapter. By reducing the processing overhead and removing potential operating system bottlenecks related to network packet processing, CPU cycles and memory bandwidth can be freed up for other application tasks, such as supporting more user sessions or processing application requests with lower latency. Additionally, support for Receive-side Scaling enables the load of inbound network traffic to be shared across multiple CPUs or cores for greater parallel processing. This removes previous architectural limitations on network throughput.

Q. What are the benefits of the Scalable Networking Pack?
A.

When combined with compatible network adapter hardware, the Scalable Networking Pack helps optimize server performance and maximize network throughput to achieve the operational gains made possible by today’s high-speed networks. This makes it possible to cost-effectively scale network-based Windows Server 2003 applications and services to meet the growing demands placed on the IT infrastructure. Additionally, the architectural innovations introduced in the Scalable Networking Pack remove the need to make costly changes to existing applications or service configurations and do not require administrator intervention. These enhancements can also ease consolidation of server resources by providing better support for and utilization of multi-gigabit networks while preserving the Windows Server 2003 security, reliability, and application compatibility customers already rely on. Depending on the workload, this can reduce network packet processing overhead by 20 to 100 percent and increase network throughput by up to 40 percent.

Q. Where can I get the Scalable Networking Pack?
A.

You can download the Microsoft Windows Server 2003 Scalable Networking Pack from here: http://support.microsoft.com/?kbid=912222

Q. What do I need to implement Scalable Networking Pack enhancements?
A.

The Scalable Networking Pack requires Windows Server 2003 (x86 and x64 editions) Service Pack 1 or later and a compatible network adapter. This includes network adapters that implement a TCP Offload Engine (TOE) or support Receive-side Scaling. Support for NetDMA requires server equipment with compatible architectures, such as Intel I/O Acceleration Technology. Customers benefit from the flexibility of selecting the technologies that best fit their needs from a rich ecosystem of hardware vendors offering support through new add-on network adapters or as part of the next generation of server platform hardware. The Scalable Networking Pack is also available for Windows XP Professional x64 Edition.

Q. Is the Scalable Networking Pack supported on Windows Server 2003 for Itanium-based systems?
A.

No. Itanium-based systems support is planned for Windows Server Code Name “Longhorn.”

Partners

Q. With which companies are you partnering to provide network adapter support for Scalable Networking Pack enhancements?
A.

Customers will benefit from a rich ecosystem of independent hardware vendors (IHV) and original equipment manufacturers (OEM) hardware solutions partners offering support for Scalable Networking Pack enhancements. This includes a wide selection of add-on network interface cards (NIC) and LAN-on-motherboard (LOM) options. For more information, see Scalable Networking Partners.

Q. Will Windows Server 2003 OEMs offer support for Scalable Networking Pack enhancements?
A.

Yes. A long list of familiar Windows Server 2003 server hardware providers will be offering integrated support for Scalable Networking Pack technologies in their next generation server platform offerings. For more information, see Scalable Networking Partners.

Scalable Networking Pack Technologies

Q. What is TCP Chimney Offload?
A.

TCP Chimney Offload provides automated, stateful offload of Transmission Control Protocol (TCP) traffic processing to a specialized network adapter implementing a TCP Offload Engine (TOE). The stateful capabilities—meaning that the network adapter retains in memory the significant attributes of a connection, such as IP address, ports being used, and packet sequence numbers—significantly reduce the need for CPU cycles in managing offloaded traffic. For long-lived connections with large packet payloads, like those associated with storage workloads, multimedia streaming, and other content-heavy applications, TCP Chimney Offload greatly reduces CPU overhead by delegating network packet processing tasks, including packet segmentation and reassembly, to the network adapter. This frees up CPU cycles for other application tasks, such as supporting more user sessions or processing application requests with lower latency.
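To illustrate what “stateful” means here, the sketch below models the kind of per-connection state a TOE-capable adapter must retain once a connection is offloaded. The field names are illustrative stand-ins, not the actual NDIS chimney data structures.

```python
# Sketch of the per-connection state a TOE NIC retains after offload.
# Field names are illustrative, not the real NDIS chimney structures.
from dataclasses import dataclass

@dataclass
class OffloadedConnectionState:
    local_ip: str
    local_port: int
    remote_ip: str
    remote_port: int
    snd_nxt: int      # next sequence number to send
    rcv_nxt: int      # next sequence number expected from the peer
    window_size: int  # current receive window

conn = OffloadedConnectionState(
    "10.0.0.5", 445, "10.0.0.7", 49152,
    snd_nxt=1000, rcv_nxt=2000, window_size=65535,
)
# Because the adapter holds this state itself, it can segment outbound
# data and acknowledge/reassemble inbound data without host CPU work.
```

Holding the 4-tuple and sequence/window state on the adapter is what lets it process whole data transfers autonomously, rather than handing every packet back to the host stack.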

Q. What is Receive-side Scaling?
A.

Receive-side Scaling enables the processing of inbound (received) network traffic to be shared across multiple CPUs or cores by leveraging new network adapter enhancements. Receive-side Scaling can dynamically redistribute inbound network traffic as system load or network conditions vary. Many scenarios--including Web servers, file transfers, block storage, and backups--require the host protocol stack to perform significant work in the context of receive interrupt processing and deferred procedure calls (DPC). In these scenarios and others, Receive-side Scaling can significantly improve the number of transactions per second, the number of connections per second, or total network throughput.
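As a rough illustration of the dispatch model, the sketch below maps each connection’s 4-tuple through a hash and an indirection table to select a CPU. Real RSS hardware computes a Toeplitz hash over the packet headers, so the generic hash here is only a stand-in; the key properties it demonstrates are that all packets of one connection land on the same CPU (preserving in-order TCP processing) and that the OS can rebalance load by rewriting the indirection table.

```python
# Simplified sketch of Receive-side Scaling dispatch.
# A generic hash stands in for the Toeplitz hash real RSS NICs use.
import hashlib

NUM_CPUS = 4
# Indirection table maps hash buckets to CPUs; the OS can rewrite it
# to rebalance load without breaking per-connection packet ordering.
INDIRECTION_TABLE = [i % NUM_CPUS for i in range(128)]

def select_cpu(src_ip, src_port, dst_ip, dst_port):
    """Pick the CPU that processes packets for this connection."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    bucket = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return INDIRECTION_TABLE[bucket % len(INDIRECTION_TABLE)]

# Different connections may land on different CPUs, but the same
# connection always maps to the same CPU.
cpu_a = select_cpu("10.0.0.1", 40000, "10.0.0.9", 80)
assert cpu_a == select_cpu("10.0.0.1", 40000, "10.0.0.9", 80)
```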

Q. What is NetDMA?
A.

NetDMA enables support for advanced direct memory access technologies, such as Intel I/O Acceleration Technology (Intel I/OAT). For servers equipped with the supported technology, NetDMA provides memory management efficiencies and network packet processing enhancements. At the heart of NetDMA is the ability to more efficiently support network data movement and reduce system overhead by minimizing CPU involvement in performing memory-to-memory data transfers. Normally the CPU is extensively involved in moving network data from network adapter receive buffers into application buffers. NetDMA largely frees the CPU from handling memory transfers by supporting use of a DMA engine. The DMA engine frees the CPU from the mundane task of copying data so that it can be better used by other applications.

Q. What is a TCP Offload Engine?
A.

A TCP Offload Engine (TOE) is a specialized and dedicated processor on a network adapter that can handle some or all of the processing of network packets. By handling all tasks associated with protocol processing, TCP Offload Engines can relieve the main system processors of this work.

Q. What is Intel I/O Acceleration Technology?
A.

Intel® I/O Acceleration Technology (Intel® I/OAT) moves network data more efficiently through Intel® Xeon® processor-based servers for fast, scalable, and reliable networking. Intel I/OAT helps provide network acceleration that scales seamlessly across multiple Ethernet ports. For more information on Intel I/OAT, see www.intel.com/go/ioat.

Deployment and Operations

Q. What types of workloads can benefit from Scalable Networking Pack enhancements?
A.

The Scalable Networking Pack can improve the performance and scalability of data-heavy workloads such as file storage, backups, Web serving, and media streaming. Depending on the workload, network packet processing overhead can be reduced by 20 to nearly 100 percent, and network throughput can increase by up to 40 percent.

Q. Can both TCP Chimney Offload and NetDMA be enabled at the same time?
A.

No. If the Scalable Networking Pack detects that the network adapter can support both NetDMA and TCP Chimney Offload, NetDMA will be disabled and TCP Chimney Offload will remain enabled. Additionally, if a network adapter supports Receive-side Scaling, this capability can be used across all TCP connections, including connections that are offloaded through TCP Chimney Offload.
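For administrators who need to control these features individually, Microsoft’s Knowledge Base documentation for the Scalable Networking Pack describes DWORD registry values under the Tcpip\Parameters key. The value names below follow that documentation, but you should verify them against the current KB article for your build; a restart is required for changes to take effect.

```cmd
:: Scalable Networking Pack feature switches (1 = enabled, 0 = disabled).
:: Value names per Microsoft KB documentation for the Scalable Networking
:: Pack; confirm against the KB article for your build. Restart required.

:: Disable TCP Chimney Offload (e.g., so NetDMA can be used instead):
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPChimney /t REG_DWORD /d 0 /f

:: NetDMA (TCPA):
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPA /t REG_DWORD /d 1 /f

:: Receive-side Scaling:
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableRSS /t REG_DWORD /d 1 /f
```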

Q. How do I know network connections are being offloaded?
A.

For TCP Chimney Offload, administrators can use the netstat.exe command to display current TCP/IP network connections and their state. From a command prompt, you can run the netstat -t command to display a list of TCP Chimney Offloaded connections. An offloaded network connection can be in one of the following states:

In Host – the network connection is being handled by the host CPU

Offloading – the network connection is in the process of being transferred to the offload target

Uploading – the network connection is in the process of being transferred back to the host CPU

Offloaded – the network connection is being handled by the offload target
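On a busy server, these states can also be tallied programmatically by scraping the command’s output. A minimal sketch in Python, assuming netstat -t style output in which the offload state appears in the last column; the sample text and exact column layout are illustrative, so verify the parsing against your system’s actual output:

```python
# Sketch: tally TCP Chimney Offload states from `netstat -t` output.
# The sample below is illustrative; real column layout can vary.
SAMPLE = """\
  Proto  Local Address    Foreign Address   State        Offload State
  TCP    10.0.0.5:445     10.0.0.7:49152    ESTABLISHED  Offloaded
  TCP    10.0.0.5:80      10.0.0.8:50001    ESTABLISHED  InHost
  TCP    10.0.0.5:80      10.0.0.9:50002    ESTABLISHED  Offloaded
"""

def count_offload_states(text):
    """Return a dict mapping each offload state to its connection count."""
    counts = {}
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0] == "TCP":
            state = fields[-1]          # offload state is the last column
            counts[state] = counts.get(state, 0) + 1
    return counts

print(count_offload_states(SAMPLE))     # e.g. {'Offloaded': 2, 'InHost': 1}
```

In practice you would feed this the captured output of netstat -t (for example via subprocess) rather than a hard-coded sample.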

Q. Does the Scalable Networking Pack support NIC teaming adapters?
A. Yes. For example, TCP Chimney Offload provides compatibility with a variety of intermediate driver solutions, such as “teaming” several network adapters to create a single virtual network adapter (for better fault tolerance or load balancing) and support for multiple Virtual LANs (VLAN). If the intermediate driver (e.g., a teaming driver) has not been updated to support TCP Chimney Offload, or chooses not to enable it, network connections will not be offloaded to the network adapter and will instead be handled by the host CPU. Refer to your NIC teaming solution provider for specific support details and requirements.
Q. Can I configure which network connections are not offloaded?
A. Yes. For example, TCP Chimney Offload allows administrative configuration of TCP port numbers (source and destination) for which connections are not to be offloaded.
Q. Are there types of network connections that cannot be offloaded?
A. To help ensure that TCP Chimney Offload will not reduce the capabilities of existing and future Microsoft Windows network stacks, it is designed so that if a connection needs capabilities the offload target does not provide (e.g., IPsec packet processing), the connection will not be offloaded. Also note that TCP Chimney Offload provides the greatest value when handling large data transfers. If an application’s workload consists predominantly of small data transfers, as with some client/server database transactions, the application may see little benefit from TCP Chimney Offload, because there is comparatively little work in transferring the data, and most of the CPU’s time is spent alerting the application that the buffer data transfer has completed.



