Low Latency Workloads Technologies


Applies To: Windows Server 2012

This section provides overviews of the following technologies, which were designed for or improved in Windows Server® 2012 to address low latency computing scenarios.

  1. Data Center Bridging

  2. Data Center Transmission Control Protocol (DCTCP)

  3. Kernel Mode Remote Direct Memory Access (kRDMA)

  4. Network Interface Card (NIC) Teaming

  5. NetworkDirect

  6. Receive Segment Coalescing (RSC)

  7. Receive Side Scaling (RSS)

  8. Registered Input-Output (RIO) API Extensions

  9. Transmission Control Protocol (TCP) Loopback Optimization

  10. Low Latency Workloads Management and Operations

Latency means delay: the length of time that elapses between two specific events, such as the time between the transmission of a network message by one computer and its reception by another over a network path. Latency has a variety of possible causes, including electrical propagation delays, processing delays, and queuing effects.
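As a minimal illustration of measuring this kind of delay, the following sketch times the round trip of a small message over the TCP loopback interface. It is a hypothetical example, not part of any Windows Server feature; the use of an OS-assigned port and the `TCP_NODELAY` option (to avoid Nagle batching delays) are assumptions made for the sketch.

```python
import socket
import threading
import time

HOST = "127.0.0.1"

# Create the listening socket up front so the client knows the port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, 0))            # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def echo_server():
    """Accept one connection and echo every message back."""
    conn, _ = srv.accept()
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)
    srv.close()

threading.Thread(target=echo_server, daemon=True).start()

samples = []
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, port))
    # Disable Nagle's algorithm so small messages are sent immediately.
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    for _ in range(1000):
        start = time.perf_counter()
        cli.sendall(b"ping")
        cli.recv(1024)          # wait for the echo
        samples.append(time.perf_counter() - start)

samples.sort()
print(f"median loopback round-trip: {samples[len(samples)//2] * 1e6:.1f} microseconds")
```

Even on the loopback path, where no physical network is involved, the measured round-trip time is nonzero because of processing and scheduling delays, which is why technologies such as TCP Loopback Optimization exist.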

A variety of processing workloads require that the time spent on inter-machine communication be reduced as much as possible. These workloads include distributed computing algorithms whose convergence time is bounded by network latency. Examples of such systems include distributed consensus and agreement protocols, Message Passing Interface (MPI) workloads, and distributed caching. Stock trading and other financial market workloads also require that the latency incurred by network communications be reduced to the greatest degree possible.

Low latency computing environments typically contain applications that require very fast inter-process communication (IPC) and inter-computer communication, a high degree of predictability in latency and transaction response times, and the ability to handle very high message rates. The topics listed above contain information about technologies that you can use to improve performance in low latency computing scenarios.