Network Infrastructure: Moving to Gigabit Ethernet and ATM


By Andy Weeks

Published in TechRepublic's Windows NT Enterprise Strategies

The adoption of Ethernet and 10BASE-T as networking standards revolutionized corporate computing by driving down per-unit pricing and increasing bandwidth capacity. In the same way, the rise of new technologies like Fast Ethernet, Gigabit Ethernet, and ATM is revolutionizing IT by increasing the bandwidth available to each desktop while reducing the cost to move data across your network.

Smart organizations are:

  • Replacing aging cabling plants with UTP

  • Replacing backbone networks with a switching fabric

  • Migrating workstations to high-bandwidth connections as business needs dictate

  • Evaluating the business case for integrating voice, video, and data networks using switching technology

Let's face it. Networking infrastructure simply isn't that interesting. Although CIOs, CTOs, and other IT executives understand the importance of the infrastructure that underlies their networks, no one gets excited about it. Upgrading to Fast Ethernet just doesn't provide the same thrill as a migration from Windows NT 4.0 to Windows 2000.

While understandable, this view is shortsighted. If your organization is large enough, the chances are good that right now you've got system engineers and network administrators troubleshooting performance issues that don't have anything to do with your Network Operating System (NOS) but are instead caused by topology or routing errors.

Further, the focus of networking has changed over the past decade. Early network design activities focused on the physical aspects of the network. Now, network structure is the primary consideration. This new focus allows business needs to drive network design, not the other way around.

In this briefing, we'll examine the current state of network infrastructure. We'll look at the emerging technologies driving infrastructure change and how you can position your organization to take advantage of these new technologies.

On This Page

Ethernet and 10BASE-T
The Route to Performance
More Bandwidth: Fast Ethernet
Switching to New Heights
Gigabit Technologies
ATM: Not Just a 24-Hour Bank
What's Next?
Additional Information

Ethernet and 10BASE-T

Computer networking has been around for almost as long as computers. For example, SNA (Systems Network Architecture) was developed by IBM to connect terminals and other devices to its mainframe computers.

Three networking technologies defined early PC networking: Ethernet, ARCnet, and Token Ring. (Of course, there were many more in the beginning, but these three were the only ones to gain any real market share.) Of the three, Ethernet emerged as the industry standard for most applications.

Ironically, Ethernet wasn't created for personal computers at all. The technology was developed at Xerox to let minicomputers and workstations talk to each other, and was later standardized by Digital, Intel, and Xerox before being adapted for PC and Macintosh networking. The original Ethernet system had a theoretical bandwidth of 10 Mbps, although a segment had a tendency to clog with traffic once 60 to 70 workstations were attached.

The modern age of networking arrived in the form of a product called LattisNet from a small company named SynOptics (which later merged into industry giant Bay Networks). LattisNet allowed Ethernet to run across unshielded twisted-pair (UTP) cabling. This cable was smaller, more flexible, and much less expensive to install than coaxial cable. LattisNet systems were also more flexible and more reliable than the early Ethernet systems they replaced.

When the key components of LattisNet, now called 10BASE-T, were incorporated into the IEEE 802.3 Ethernet standard, the networking industry had the "stamp of approval" necessary for widespread implementation. The industry became more price-driven, cutting equipment costs dramatically. Costs soon broke the $100-per-port mark, making pervasive networking financially reasonable. NICs (Network Interface Cards), for example, became a commodity.

Network Elements

  Old Infrastructure (circa 1996):
    • UTP cabling
    • 10BASE-T to workstations
    • 10BASE-T to servers
    • Stackable workgroup hubs
    • Backbone router
    • WAN router connected via T1 (1.544 Mbps) to WAN provider

  New Infrastructure (circa 2003):
    • UTP/fiber cabling
    • Fast Ethernet to workstations
    • ATM to servers
    • Workgroup switches
    • ATM backbone switch
    • ATM connection to WAN provider via OC-3 (155 Mbps) or OC-12 (622 Mbps)

Per-Workstation Bandwidth

  Old: 10 Mbps (shared among all workstations on the local segment)
  New: 100 Mbps dedicated to each workstation

Primary Applications

  Old: file and print services; client/server applications; mainframe access
  New: file and print services; Internet access; client/server applications; data warehouse access; full-motion video; voice communications

Table 1: Network Comparison

The Route to Performance

Both Ethernet and Token Ring work on the concept of shared bandwidth. Basically, a network is like a water pipe. Each workstation and server can put drops of water into the pipe, but once the pipe is full, no more can be added until some of the drops begin moving to their destination.
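
To make the pipe analogy concrete, here is a back-of-the-envelope sketch in Python of how per-station bandwidth collapses as workstations are added to a shared 10-Mbps segment. The 60-percent usable-capacity ceiling is an illustrative assumption, not a measured figure; real shared-Ethernet throughput depends heavily on traffic patterns.

```python
# Toy model of shared bandwidth on a 10-Mbps Ethernet segment.
# The usable-capacity ceiling is an assumption for illustration only.

SEGMENT_CAPACITY_MBPS = 10.0   # theoretical bandwidth of one segment
USABLE_FRACTION = 0.6          # rough ceiling before collisions dominate

def per_station_throughput(stations: int) -> float:
    """Average bandwidth each station sees when all transmit at once."""
    usable = SEGMENT_CAPACITY_MBPS * USABLE_FRACTION
    return usable / stations

for n in (10, 30, 60, 70):
    print(f"{n:3d} stations -> ~{per_station_throughput(n):.2f} Mbps each")
```

At 60 to 70 stations, each one is left with roughly a tenth of a megabit per second, which is why segments of that size felt clogged.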

As networks grow beyond these limitations, engineers must look at ways to support more workstations and servers. Using bridges and routers, networks can be segmented and larger internetworks created. Basically, a bridge is a relatively unintelligent device: it passes traffic from one segment to another wholesale, including broadcast traffic and even garbage frames. A router provides more intelligence and forwards only information that is targeted for the next segment or beyond.
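
The distinction can be sketched as a forwarding decision. The toy Python functions below are illustrative only; the frame and packet fields are invented for the example and are not any vendor's forwarding code.

```python
# Toy contrast between a bridge and a router. A bridge passes along
# anything not known to be local (broadcasts included), while a router
# forwards only packets it can match against a route.

BROADCAST = "ff:ff:ff:ff:ff:ff"

def bridge_forward(frame: dict, local_macs: set) -> str:
    """A bridge forwards anything not known to be local, including
    broadcasts and malformed 'garbage' frames."""
    if frame["dst_mac"] == BROADCAST:
        return "forward"                      # broadcasts always cross
    return "drop" if frame["dst_mac"] in local_macs else "forward"

def router_forward(packet: dict, routes: list) -> str:
    """A router forwards only traffic it can match to a route; broadcasts
    and unroutable packets stay on the local segment."""
    for prefix, next_hop in routes:
        if packet["dst_ip"].startswith(prefix):   # crude prefix match
            return f"forward via {next_hop}"
    return "drop"

print(bridge_forward({"dst_mac": BROADCAST}, {"00:a0:c9:12:34:56"}))
print(router_forward({"dst_ip": "10.1.2.3"}, [("10.1.", "192.168.0.1")]))
```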

Routers play a huge role in network design. Their strength is the ability to intelligently analyze network traffic and determine the most efficient route to send information from place to place. In some ways, this is also their weakness: it takes massive computing power to examine each piece of information as it comes into the router and decide what to do with it. Each router that a packet of data passes through slows the packet's trip from source to destination. Some complicated hierarchical network designs called for information to flow through five or more routers to get from point to point.

Routers are still required elements at the boundary of the network. They provide the link between campus networks and a company's Wide Area Network (WAN) as well as the Internet. Because of bandwidth and security concerns, companies use routers to ensure that only traffic intended to leave the LAN is forwarded onto the WAN. In fact, a special class of routers, called firewalls, has evolved to provide highly secure connectivity to public networks like the Internet.

From a design perspective, an all-router network still relies on the idea of shared bandwidth. Routers basically move data from one pipe to another. As a result, when traffic on the network grows, routers can actually contribute to the congestion they were installed to prevent. The only way to increase performance in a routed network is to increase the bandwidth of each component, so router manufacturers tend to focus on developing ever-faster routers, increasing network throughput, and continually improving NIC performance.

More Bandwidth: Fast Ethernet

One of the major improvements in overall network throughput has been increased network bandwidth, the size of the pipe. Several technology advances have contributed to this revolution. The first was adapting Token Ring concepts to a fiber-optic networking scheme called FDDI (Fiber Distributed Data Interface). FDDI is a 100-Mbps technology, providing a 6x increase over 16-Mbps Token Ring. FDDI never caught on as a workstation networking technology because it requires expensive fiber-optic cabling, but it continues to see substantial use as a backbone network. A backbone is basically a network that connects other networks.

More importantly, FDDI provided much of the technical basis for the development of 100-Mbps Ethernet, also known as Fast Ethernet. Fast Ethernet is a 100-Mbps technology that is currently supported by almost all of the major networking hardware vendors and has been accepted as the IEEE 802.3u standard. Because of the number of manufacturers looking to gain an edge in this new market, vendors continue to increase performance and capability while reducing costs. In fact, Fast Ethernet NICs are cheaper than regular Ethernet NICs were a few years ago.

Many companies have standardized on Fast Ethernet for new network implementations. In addition, some vendors are providing hubs that allow 10-Mbps and 100-Mbps Ethernet devices to be connected simultaneously to the same network. This allows companies to gradually transition to new technology without having to make a single, huge investment.

Switching to New Heights

As with LattisNet, the next revolution in networking came through a shift in thinking. Instead of focusing on increasing the size of a single pipe, switching technology makes the network look like a collection of point-to-point connections. A switch forwards each packet directly from its source to its destination; in essence, it looks like a large router with a port for every device on the network. Packets do not contend for bandwidth because they never travel across a shared segment but are cut through directly to their destination. In addition, switches work at the hardware level, so they move packets an order of magnitude faster than current-generation routers at similar price points.
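
Here is a minimal sketch of that idea as a toy Python model (not real switch firmware): the switch learns which port each hardware address lives on, then forwards frames only to that port, flooding only while the destination is still unknown.

```python
# Toy model of a learning switch: a table mapping hardware (MAC)
# addresses to ports turns a shared medium into point-to-point paths.

class ToySwitch:
    def __init__(self, ports: int):
        self.ports = ports
        self.mac_table = {}   # MAC address -> port number

    def handle_frame(self, in_port: int, src: str, dst: str) -> str:
        self.mac_table[src] = in_port          # learn the sender's port
        out_port = self.mac_table.get(dst)
        if out_port is None:
            return "flood to all other ports"  # destination not yet seen
        return f"switch directly to port {out_port}"

sw = ToySwitch(ports=24)
print(sw.handle_frame(1, "aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"))  # flood
print(sw.handle_frame(2, "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))  # port 1
```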

Switches can be used as drop-in replacements for network hubs, sometimes dramatically increasing performance. They can also replace routers and backbone networks.

It's important to note that switches vary in capacity. This is typically expressed as backplane bandwidth. A desktop switch may efficiently forward data for 5 to 10 workstations, while a workgroup-class switch may be able to efficiently switch packets for 5 to 10 desktop switches and servers. A backbone switch, on the other hand, can aggregate data for many workgroup switches, in essence the entire corporate network. Costs scale accordingly.
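
A quick back-of-the-envelope calculation shows why backplane bandwidth separates these classes. The port counts below are illustrative assumptions, not vendor specifications.

```python
# Rough sizing: a switch backplane must carry the traffic of all its
# ports at once. Port counts here are illustrative assumptions.

def backplane_needed_mbps(ports: int, port_speed_mbps: float) -> float:
    """Aggregate bandwidth required if every port runs flat out."""
    return ports * port_speed_mbps

print(backplane_needed_mbps(8, 100))        # desktop switch:   800 Mbps
print(backplane_needed_mbps(48, 100))       # workgroup switch: 4,800 Mbps
print(backplane_needed_mbps(10 * 48, 100))  # backbone feeding 10 workgroups
```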

Switches continue to be relatively expensive devices, at least when compared to the cost of network hubs. In practice, this has forced many companies to move toward a switching architecture for their network backbone, while continuing to use 10- or 100-Mbps hubs at the workgroup level.

Ultimately, as switching technology matures, costs will continue to fall. Many companies will consider implementing a switched network fabric, with switches at all levels of the network design. This will allow increased throughput and additional redundancy.

Gigabit Technologies

Although most vendors remain focused on switching technologies, the race for more bandwidth continues. The next horizon in networking bandwidth is at 1,000 Mbps, or the Gigabit level. The IEEE standards body has accepted Gigabit Ethernet as standard 802.3z.

Gigabit Ethernet is evolutionary technology. Basically, the Ethernet topology was grafted onto Fibre Channel, a previously developed high-bandwidth transport. As a result, the development time for Gigabit Ethernet was very short (even by high-tech standards), and R&D costs were minimal. Today, Gigabit Ethernet runs on fiber-optic cable. Work is under way to complete an IEEE standard for Gigabit Ethernet over UTP. In fact, this standard is now likely three to six months from completion.

At this point, Gigabit Ethernet offers no additional functionality but simply raises the performance bar for networking by an order of magnitude. Like switching, Gigabit Ethernet will most likely be used first for backbone connections. As more vendors implement the standard and costs begin to fall, the technology will likely find use for workstation connectivity, especially in resource-intensive applications like CAD, 3-D graphics, streaming video, and database manipulation.

ATM: Not Just a 24-Hour Bank

All the technologies we've discussed so far are limited to data applications. One of the trends that we'll see in the next decade is the consolidation of different types of communications onto a single delivery mechanism. Examples include voice and video traffic in addition to data. This has a number of advantages for enterprises. Today, companies must run multiple cables to each desktop to support networking, telephony, and video applications. In addition, each technology requires a separate back-end infrastructure. Consolidating to a single delivery mechanism reduces the installation costs, but more important, promises to significantly reduce operational and support costs.

Some attempts have been made to adapt current data transports to carry audio and video signals. For example, VoIP (Voice over Internet Protocol) allows voice conversations to be carried over the Internet. Unfortunately, although technically feasible, there are drawbacks to carrying delay-sensitive traffic such as voice and video over best-effort data networks. Business-quality audio, for example, requires continuously guaranteed bandwidth. Imagine a telephone conversation where every other word is missing or arrives in the wrong order, or one with a two-second pause between each sentence. Although this is an oversimplification, it does serve to illustrate the nature of the problem.
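
The following toy sketch, with made-up timing values, shows the shape of the problem and the usual workaround: voice packets arrive late and out of order, and the receiver must trade added delay (a playout buffer) for correct ordering.

```python
# Toy illustration of voice over a best-effort network. Packets 2 and 3
# arrive out of order; a fixed playout delay restores smooth, in-order
# playback at the cost of added latency. All timings are invented.

arrivals = [(1, 20), (3, 45), (2, 60), (4, 80)]  # (sequence, arrival ms)

FRAME_MS = 20          # each packet carries 20 ms of speech
PLAYOUT_DELAY_MS = 50  # jitter buffer: hold-back before playing

for seq, arrived in sorted(arrivals):
    scheduled = seq * FRAME_MS + PLAYOUT_DELAY_MS
    play_at = max(arrived, scheduled)   # can't play before it arrives
    print(f"packet {seq}: arrived {arrived:3d} ms, played at {play_at:3d} ms")
```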

The first technology that truly integrates all these others grew from the telephone industry. Asynchronous Transfer Mode, or ATM, is a switched technology designed from the ground up to support voice, video, and data. The technology involves a number of elements, but the key ones include small, fixed-length cells (53 bytes each), cell prioritization, and highly efficient switching. In addition, ATM, like Ethernet, is easily scalable to higher bandwidths. Today, 25-Mbps and 155-Mbps ATM products are available, and technology that provides 622 Mbps and 2.5 Gbps is on the drawing board.
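
That central design choice can be sketched in a few lines of Python. This is a simplified model, not a protocol implementation: every ATM cell is exactly 53 bytes (a 5-byte header plus 48 bytes of payload), so any message becomes a predictable stream of identical cells that hardware can switch at a constant rate. Real ATM headers pack the VPI/VCI and control bits far more tightly than the fields shown here.

```python
# Simplified model of ATM cells: fixed 53-byte cells (5-byte header,
# 48-byte payload). Field layout is illustrative only.

from dataclasses import dataclass

CELL_BYTES = 53
HEADER_BYTES = 5
PAYLOAD_BYTES = CELL_BYTES - HEADER_BYTES   # always 48

@dataclass
class AtmCell:
    vpi: int        # virtual path identifier
    vci: int        # virtual channel identifier
    clp: int        # cell loss priority (0 = discard last)
    payload: bytes  # exactly 48 bytes, zero-padded if needed

def cells_for(data: bytes, vpi: int, vci: int) -> list:
    """Chop a message into fixed-size cells, padding the final one."""
    cells = []
    for i in range(0, len(data), PAYLOAD_BYTES):
        chunk = data[i:i + PAYLOAD_BYTES].ljust(PAYLOAD_BYTES, b"\x00")
        cells.append(AtmCell(vpi=vpi, vci=vci, clp=0, payload=chunk))
    return cells

print(len(cells_for(b"x" * 1000, vpi=1, vci=42)))  # 21 cells for 1,000 bytes
```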

The primary drawback to ATM today is that it truly is a completely new technology. Ethernet switching, Fast Ethernet, and Gigabit Ethernet are evolutionary technologies; they can be installed incrementally without completely restructuring the network. ATM is different all the way down to the way data is moved from point to point. As a result, vendors have had to do a lot of work to create an emulation layer (LAN Emulation, or LANE) that allows ATM to look, to the higher-level protocols, like other, more familiar topologies such as Ethernet.

The first implementers of ATM are almost universally local and long-distance telephone companies. They're using ATM to replace aging voice-switching networks. Telephone companies require high-availability systems with massive capacity, and they are willing to pay the price. This massive influx of dollars is subsidizing R&D in the ATM industry that will raise the technology bar while decreasing costs.

The final advantage of ATM is that it will allow for a truly seamless network. Today when a company sets up a WAN, it typically leases communications lines from a telephone company and uses proprietary equipment to connect to those lines. With ATM, a company will need only to connect its ATM network to the local telco's ATM network. This will reduce the cost of wide-area networking while increasing flexibility and bandwidth.

In practice, some companies are looking at ATM as a backbone technology, not because it provides dramatic advantages over switching but because it is an investment in the future. Companies that have to replace their main telephone switch, for example, may implement ATM simultaneously to take advantage of consolidated voice and data.

We are likely to see hybrid network approaches, with ATM as the backbone and Fast Ethernet to the desktop. ATM will probably never become a mainstream desktop technology simply because of the dominance (and market acceptance) of Ethernet. This is another example where technical superiority does not guarantee market share.

What's Next?

Whatever else happens, we can count on two things remaining true: Bandwidth will increase and per-Mbps costs will fall correspondingly. Gigabit technologies are simply the next stop on the bandwidth express. Over the next five years, we can expect multi-gigabit bandwidth, heading towards 10- and 100-Gbps.

We can also expect costs to fall. As technologies mature and manufacturers recoup their R&D costs, products become commoditized and prices fall until they approach raw material cost.

Per-workstation costs for mature technologies will trend below $100. For example, 10-Mbps Ethernet has been below this mark for several years. 100-Mbps Ethernet is quickly approaching this milestone. Once 100-Mbps crosses this threshold, new purchases of 10-Mbps products will drop, and in fact we are seeing this trend today.

Most companies are heading towards implementing their second- or third-generation network infrastructures. The average age of a corporate network infrastructure will drop as the pace of change in networking technology continues to increase.

ATM and other switching technologies will continue to mature. These products will cross over from specialty applications to the mainstream as they achieve cost and performance stability and standardization. Standardization may come from external sources, such as the IEEE standards organization, or from de facto status brought on by sheer market share.

Andy Weeks is the Director of Consulting for Koinonia Computing. He has worked in the Information Technology field for over a decade as an end-user support manager, network architect, and most recently as a business process consultant. You can reach him at aweeks@koincompute.com.

Additional Information

For more information or to subscribe, go to the TechRepublic Web site at https://www.techrepublic.com.
