The Basic Web
Chapter 10 from Step Up to Networking, published by Microsoft Press
The World Wide Web certainly has come a long way from Tim Berners-Lee's original vision back at the CERN particle physics laboratory in Switzerland. Though it is still thoroughly grounded in the use of hyperlinks for navigating from document to document, the Web is a much bigger, more colorful, and—frankly—far noisier place than it once was.
Today, it has also grown beyond its academic bounds to become a considerable force in both personal and business computing. It now represents not only a vast electronic library of information but also a new electronic marketplace that growing numbers of businesses see as a critically important, rapidly evolving arena for selling, marketing, advertising, buying, and even financing purchases.
It's a little difficult to tell: was it the explosion of new users that encouraged businesses to venture onto the Web, or was it the presence of well-known businesses that attracted tens of millions of new "surfers" to this virtual universe? In the end, resolving this chicken-or-egg question doesn't really matter. Today, the Web is a fact of computing life and, in little more than a few years, it has become an everyday source of information, entertainment, and shopping. Web technologies and protocols have standardized business computing to the point that intranets have become integral to many corporations, extranets are becoming increasingly useful tools for businesses to interact with customers and partners, and e-commerce is projected to grow by the billions of dollars in the next few years.
To paraphrase a recent American presidential campaign, "it's the Web, stupid." And since the Web also happens to be the ultimate in networks, it's a more than fitting "place" to end a survey of networks and networking.
The Internet and the World Wide Web
Everyone these days seems to use Internet and World Wide Web more or less interchangeably. But as you already know, they are not the same thing. The Internet is the vast, interconnected collection of servers and networks attached to backbones around the world. The Web is but a portion of the Internet, even though many, if not most, end users think of it as "the Internet."
So what makes the Web the Web, and how do people find their way around it? If you've used the Web, of course you know the answer already. But take a quick look behind the scenes, so to speak, at what's involved when you open a browser window, type a site name in the address bar, and press Enter.
Although people talk about surfing the Web as if it were some geographic locality with well-defined borders, it is not. It is not even a place. It is a collection of documents. These documents are known as pages, and collectively these pages make up the millions of sites that anyone with Internet access can visit at will.
These pages and sites together present information in the colorful, sometimes eye-searing format that includes not only text but graphics, sound, animation, video, and the omnipresent links that require nothing more than a mouse to enable a visitor to navigate from page to page and from site to site. With advances in Web technology over the past few years, pages can even contain small programs—scripts, applets, ActiveX controls—that add interactive capabilities that allow the user to do things beyond merely viewing the page.
There are literally uncounted millions of pages out there on the Web. But do they just float around like leaves in the autumn breeze? Snicker…obviously not. They are organized hierarchically within individual Web sites. Some of these sites are one-stop, single-page affairs. More often, however, they are collections of related pages arranged somewhat like chapters in a book. Depending on the site, these chapters can consist of a relatively few pages, as the one at the top of the next page does.
Illustration courtesy of Little Bit Therapeutic Riding Center and Current ImageSM graphic art and Web site design.
Or they can consist of hundreds or even thousands of related pages, like those in the Microsoft Web site illustrated earlier.
But no matter the size of the site, any or all of these pages can contain links to other pages, either in the same or in related sites, and a visitor can work through the site's hierarchy by starting at the main, or home, page. This home page provides access to the various subcollections of pages that make up the site. In the case of Microsoft's Web site, for example, the home page guides visitors to subsets of pages on products, support, events, and so on. To see the pages in a particular subset, the visitor points and clicks on the desired topic and, when that topic's main page appears, points and clicks again to view specific pages, as shown at the top of the next page.
Web Addresses and URLs
But how does someone tell a computer—a nonthinking piece of machinery—how to find a particular site and page? By its address. Each site is a collection of documents put together and maintained by a single individual or organization—a collection stored on one or more servers—and these documents are accessed, in the case of the Web, by an address known as a Uniform Resource Locator, or URL.
URLs look a lot like path/filename combinations and, like them, they are used to specify the exact location and name of a resource on the Web. Although URLs differ as much as any street addresses, they all take the following form:

protocol://[www.]servername/path
Protocol specifies the protocol needed to access the resource. For the Web, this is usually the protocol named HTTP (Hypertext Transfer Protocol), which is described in more detail a little later.
www specifies a site on the World Wide Web. This part of the URL is shown in brackets here because some, but not all, browser applications automatically insert the letters for the user.
Servername is the name of the computer on which the resource is stored. This is the part of the URL most people think of as the actual Web site—for example, microsoft.com or whitehouse.gov. The site isn't, however, an actual place, so to avoid confusion it's good to think of it, at least now and then, as a computer rather than as a storefront or library or whatever physical building comes to mind when you think of a Web site.
Path is the route to the actual resource, including the name and type of the document to be displayed.
So, for example, the following URL:

http://www.microsoft.com/ie40.htm
tells you (actually the browser software that sends the request and displays the result) to use the HTTP protocol to connect to the World Wide Web (www) server named microsoft.com and display the document named ie40.htm. (The htm is a filename extension that identifies the document as an HTML—Web—document.)
This URL is what people type into the address bar of a browser window or, sometimes, click in an e-mail message or a document if they are running a Web-enabled application, such as Microsoft Word or Microsoft Outlook. It is also the path—unseen by the user—that browser software follows whenever a link is clicked on a Web page.
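For the curious, the way a browser breaks an address into its parts can be sketched in a few lines of Python. The standard urllib.parse module splits the example URL above into the protocol, server name, and path just described; the URL is the illustrative one from the text, not a live address.

```python
from urllib.parse import urlparse

# Split a Web address into the parts described above.
url = "http://www.microsoft.com/ie40.htm"
parts = urlparse(url)

print(parts.scheme)   # http  (the protocol the browser should use)
print(parts.netloc)   # www.microsoft.com  (the server name)
print(parts.path)     # /ie40.htm  (the route to the document)
```

The same module, run in reverse with urlunparse, is what lets software reassemble the pieces into a complete address.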
So far, so good. Open a browser window and type the URL you want, or point, click, and follow a link to view a Web page. But just how does that page, often consisting of many different kinds of information, manage to be displayed on screen? For that matter, how is it that someone can click a link and immediately—or sometimes not so immediately—be transported to an entirely different page or Web site? What makes a link a link, and how does it differ from the remaining text and other objects on the page (besides usually being displayed in a different color)?
The key to all this is the Web browser, the software that finds Web sites and then translates the codes that describe the page in order to produce the display that appears on screen. Whether built as a standalone application (as is Netscape Navigator) that runs on top of an operating system or as part of an operating system (as is Internet Explorer), browser software is designed to understand Web technologies and to display pages correctly.
Just as a word processor or a spreadsheet program must understand the hidden codes and commands that determine how a letter, a chart, or a budget must be displayed, browser software must understand the codes and commands embedded in a Web page that determine how, where, and in what fonts and colors the elements on the page must be displayed within the browser window. Those codes and commands are part of the Web markup language known as HTML, or Hypertext Markup Language.
HTML: The Language of the Web
HTML is the universal language of Web page creation. To the uninitiated, it is as cryptic as a foreign language and as enlightening as mud. To the initiated and to browser software, however, HTML describes a Web page and everything on it with crystalline clarity (most of the time, anyway).
To understand a little bit of what HTML is all about, start by examining its name:
Hypertext refers to the fact that HTML is designed to describe hypertext, that is, Web documents. (In actuality, Web documents are better described as hypermedia because they contain more than plain text. However, hypertext is at the root of the Web, so continuing use of the word is, if nothing else, a nice tribute to the Web's origins.)
Markup refers to the fact that HTML is used to mark up documents. That is, it describes elements on a page in much the same way an editor (of the human variety) describes the way a printed page should look by marking up a manuscript with special codes and symbols for italics, boldface, indented paragraphs, and so on.
Language refers to the fact that HTML, like any other language, is based on certain codes and conventions that enable anyone familiar with them to read and understand "sentences" written in HTML.
Entire books have been and are being written about HTML and how to use it. Unless you plan to specialize in Web page design and creation, there's no need to know the intricacies of HTML in any great detail, other than to satisfy curiosity. However, it doesn't hurt to understand at least a little about how it works, so here goes.
HTML is based on the concept of embedded tags that define certain properties of a document. What kinds of properties? There are lots of them, including such easy-to-understand ones as those that indicate where a new paragraph begins, where boldfacing begins and ends, and where an image should appear. Tags are enclosed in angle brackets, like this:

<TAG>
and they sometimes appear in pairs, in which case they take the form:

<TAG> ... </TAG>
(Note the / preceding the ending tag.)
So, for example, a Web designer wanting to start a new paragraph would use a new paragraph tag to show where the paragraph begins, like so:

<P>
and would enclose text to be boldfaced in begin-boldface and end-boldface tags like this:
<B>This text is bold</B>
HTML includes a number of common tags, including one special one known as an anchor. This particular tag is used to indicate a link, a hypertext reference (HREF) to a specific URL. An anchor begins with the characters A HREF, followed by the URL and the "friendly" text (or image) that will be used to represent the URL in the document, and it ends with the characters /A. So, for example:
<A HREF="http://www.microsoft.com">Microsoft</A>

shows that the highlighted or underlined word Microsoft on a Web page would be associated with the Microsoft home page described by the link http://www.microsoft.com.
In addition to tags in general, HTML is also characterized by the way it divides the coding of a document into complementary sections called the head and the body. The head section, marked off by the <HEAD> and </HEAD> tags, describes the document itself—for example, the document title. The body section, marked by the <BODY> and </BODY> tags, contains the actual document content—text, images, sound files, and so on—plus, of course, the tags that describe how the body of the document appears on screen.
So you can see how these and other HTML elements are actually used, the following illustration shows a small portion of the HTML coding for a real Web page:
And this is the page that coding describes:
Some difference, but without the former, you would never see the latter.
HTTP: The Web Transport Service
HTML is what makes page display possible within a browser window. It does not, however, transport a page to the browser. That job belongs to the Web protocol identified by the near-ubiquitous letters http, which appear at the beginning of every URL clicked or typed to visit a Web site or to request a specific document within the site.
HTTP, the Hypertext Transfer Protocol, operates between two, and only two, types of entities, Web browsers and Web servers, and its function is equally clear-cut: it carries requests from browsers to servers, and it transports requested pages (if available) from servers back to browsers.
HTTP is an object-oriented protocol, which—in part—means that it relies on commands known as methods to work with Web pages (the objects). These HTTP methods include a number of commands whose meanings are relatively easy to interpret. For example:
GET is a request to read a page.
HEAD is a request to read the header of a page—for example, to determine when it was last modified.
PUT is a request to store a page on a server.
POST is a request to append, or add, information to a resource identified by a URL—for example, to post a response to a bulletin board.
A typical HTTP interaction between a browser and a Web server would thus be a relatively simple two-step process like this:
The browser sends a request to a server by using an HTTP command, such as GET to request a particular Web page.
The server finds the page, if it can, and sends it back to the browser. To let the browser know how its request fared, the server also sends back one of several numeric messages. If the server is able to carry out the request, for example, its return message would be the HTTP response signaling "success." If the request failed for some reason, the server would return a signal indicating the type of error—for example, the numeric response indicating "unable to carry out the request."
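The two-step exchange can be made concrete with a small sketch of what actually travels over the wire. The Python fragment below builds the text of a browser's GET request and pairs two well-known numeric responses (200 for success, 404 for a page the server cannot find) with their meanings; the host and path are illustrative, and no real connection is made.

```python
# What the browser's request looks like on the wire: the method, the
# resource, and the protocol version, then headers, then a blank line.
def build_get_request(host, path):
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"

# Two of the numeric messages a server can send back.
STATUS_MEANINGS = {
    200: "OK - request succeeded",
    404: "Not Found - unable to carry out the request",
}

request = build_get_request("www.microsoft.com", "/ie40.htm")
print(request.splitlines()[0])  # GET /ie40.htm HTTP/1.0
print(STATUS_MEANINGS[200])
print(STATUS_MEANINGS[404])
```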
Although HTTP is widely used and has, in fact, deliberately been left open to improvement and evolution, it was not designed with high security in mind. However, HTTP has been extended to meet security concerns in a form known as SHTTP (Secure HTTP), a development that adds encryption and security features to HTTP. Also, and rather confusingly, there is another form of HTTP that is sometimes called Secure HTTP and sometimes called HTTP Secure. Abbreviated in URLs as HTTPS, it is a protocol developed by Netscape for encrypting pages and accessing Web servers through a secure port. HTTPS essentially allows HTTP to run on top of a Netscape-devised security layer known as SSL (briefly described later in this chapter). Both of these security-minded extensions of HTTP were designed to support privacy and commercial transactions on the Web.
Note: Encryption and other approaches to ensuring security on the Internet and the World Wide Web are described in the "Security" section later in this chapter.
Businesses and the Web
Web technologies already play important parts in business networking and seem destined to become even more important as the Internet grows faster and more sophisticated. The Web's all-important hyperlinks, for instance, are now routinely embedded as "live" links within documents and e-mail, providing users with the ability to jump from information source to information source as randomly (and even illogically) as their needs or moods take them.
On a larger scale, Web technologies are also causing the line between the network "out there" and the network "within" to grow fainter all the time. Features that originated in browser software, for example, now contribute to functions as basic as file management and display, to provide a degree of consistency across applications. More importantly, these features also help end-user software erase the difference between local and remote files, so that users can now concentrate on what they want to see, rather than where they must look to find it.
On an even larger scale, in the past few years Internet and Web technologies have become deeply ingrained in business networking. Businesses in increasing numbers are establishing their presence on the Web, and in many cases they are providing Internet/Web access to employees who need, or at least can make use of, the many resources to be found on the global network. Internet technologies are also becoming basic to business communications and telecommuting.
Within the enterprise, large corporations are finding that intranets provide an easy-to-use, secure, and cost-effective means of distributing information of all types. In addition, some businesses are even opening up their intranets on a limited basis to trusted outsiders—vendors, partners, suppliers, and so on. And finally, there is e-commerce, the latest, greatest use of the Internet that, more than any before, has perked up the ears of corporations and governmental agencies alike because of its growing economic clout and mass-market appeal, even to computing "newbies."
Intranets and Extranets
Probably the most significant application of Web-related technologies to corporate networks is the creation of intranets and extranets, both of which resemble miniature Internets—or, rather, miniature Webs. Both intranets and extranets rely on browser software as the key to accessing, viewing, and using the applications and documents they are based on. Intranets are internal to a corporation. Extranets take the concept of intranets a step further by opening up part of the intranet to access by trusted outside parties.
Both intranets and extranets rely on Internet protocols and technologies, including HTTP and TCP/IP for transport and HTML for describing documents. In widespread enterprises, they may also cover multiple LANs or spread across an entire WAN. Basically, you can think of intranets and extranets as being "applications" of a sort that overlie a corporate network to give it the "look and feel" of the World Wide Web.
Both intranets and extranets can also provide access to the "real" Internet outside the corporation. In this case, of course, securing the internal network from strangers on the Internet is a considerable concern. Typically, corporations protect their security by using a firewall to separate the internal network from the outside world and by providing access to the Internet through proxy servers that relay requests to and from internal computers and those on the Internet. (Proxies and firewalls are described in more detail in the "Security" section later in this chapter.)
E-Commerce

Electronic commerce, or e-commerce, is the latest rage to hit the Web. Droves of businesses both large and small are actively setting up shop on the Web. Most use the Web for retail merchandising, but service-oriented businesses are taking it seriously as well. Online ticketing, for example, is available for both airlines and concerts, travel sites offer electronic hotel reservations and car rentals, and even banks and brokerages are beginning to investigate the Web as a means of doing business.
Many businesses, ranging from Eddie Bauer clothing to Dell Computer and that prodigy of electronic retailing, amazon.com, maintain their own Web sites, of course. But many are also extending their visibility by entering into partnership with the owners of a few large and heavily used sites known as portals.
If you have followed news reports in the past year, you have probably read about portals. In everyday life, a portal is an entryway, or a gateway, to somewhere else—often, an exotic somewhere else. In the Internet world, a portal serves much the same purpose: It is a gateway to the Web, a site designed to provide visitors with all the comforts of a familiar home base. It is meant to be the place to which they go automatically whenever they access the Web, and the place from which they can easily romp off in numerous preselected directions—to bulletin boards, chat rooms, and e-mail services, as well as to sites devoted to sports, news, weather, entertainment, shopping, searching the Web, and so on.
One of the main goals of a portal—in terms of its visitors—is to provide access to certain sites and services, including electronic shopping "stores" and even "malls." Some of the sites and services offered are owned by the organization that maintains the portal, but many are supplied by partners that "rent" space on the portal screen much as merchants rent shops in a mall. Some portals are highly customizable, others are not. All, however, attempt to give their users a sense of order, often categorizing sites by type, to help people search and navigate the bewildering number and variety of sites on the Web. To use marketing-related language, a portal is a means of capturing hearts and "eyeballs," in the process also generating advertising-related revenue for the portal owner.
Portals are not directly related to networking as it has been covered in most of this book, but they are related to e-commerce, which, in turn, will probably become a fact of life for many network administrators. They are, after all, expected to continue growing in popularity, and that popularity will surely have an impact on e-commerce. In addition, portals have attracted the attention and finances of large and well-known corporations, and they are currently being created and maintained by equally well-known Web technology companies, including Microsoft (MSN) and Netscape/AOL (Netcenter), and search providers such as Excite, Yahoo, and Infoseek.
In short, economics and (Web) accessibility are the driving forces behind the development and expansion of the portal concept. And both of these factors influence corporate networks, if not directly then certainly in the way the networks are designed, managed, and secured.
Security

Network security, as mentioned earlier in the book, covers a range of different issues. There is the matter of ensuring uninterrupted power to servers. There is the matter of backing up disks, mirroring or striping valuable data, and otherwise ensuring that information is not lost. On the people front, there is the need for usernames and passwords that restrict network access to authorized users, and there is the need for access control that determines who can view and modify documents, databases, and other files in which company data is stored. In terms of the Web, there are additional issues that a network faces, including:
Protecting the internal network from access by unauthorized individuals
Protecting information as it is transported over the Internet
Protecting the privacy and security of people's personal and financial information
Although proponents of the Internet's current openness are rightfully protective and proud of its accessibility, those qualities of accessibility and openness have both pluses and minuses. To help ensure that private communications and private information remain private without compromising the free spirit of the Internet, technology companies and standards bodies have developed (or are in the process of developing) a number of ways to provide security and peace of mind without unnecessarily hampering the individual.
Note: The following sections do not cover such developments as the Clipper Chip and rating or filtering software designed to control youthful (or employee) access to unacceptable materials on the Internet. These concerns can no doubt be assisted by technological barriers to access, but in the end, parental guidance (for the young) and corporate policy (for the employed) are, or should be, more effective than censorship.
Authentication and encryption
There are various approaches to ensuring that people—and programs—are legitimate, and that communications cannot readily be hijacked by electronic eavesdroppers. Among these are software devices such as digital signatures, which can be used to authenticate the sender of the message; encryption, which is used to scramble transmissions and make them unreadable; and virtual private networks, which use a technique known as tunneling to turn the very public Internet into a secure communications medium. These approaches (most of which could easily merit an entire chapter, if not an entire book) are described briefly in the following sections.
Digital signatures and personal keys If you use Microsoft Windows with Internet Explorer, you have probably seen a certificate-like window appear when you downloaded a piece of software from the Web. This window is the viewable portion of a Microsoft code-verification feature known as Authenticode. Authenticode is a means of assuring end users that the code they are downloading (a) was created by the group or individual listed on the certificate and (b) has not been changed since it was created. Although Authenticode cannot verify that the program to be downloaded is either bug-free or completely safe to use, it does verify that the code has not been tampered with after it was completed and signed by the creator.
At the heart of Authenticode is a security feature known as a digital signature, a form of cryptography that is used not only to authenticate the creator of a program but also—in circumstances such as transmission of sensitive e-mail—the sender of a small file or message. In order to work, a digital signature relies on a set of two keys, known as a public key and a private key, both of which must be acquired from a valid organization known as a certification authority (the equivalent of a locksmith). In essence, these public and private keys are comparable to the username and password that validate network users at logon. The public key, like the username, is one that can be given out to other people. The private key, like the password, is one that should be known by only one person, its owner.
When these keys are used to sign a program or file, the process involves the calculation of a value known as a hash number, which is based on the size of the file and on other information, including information about the sender. This hash number is then "signed"—turned into an encrypted string of bits—with the sender's private key to create (ideally) a value that can be matched only by the information used to create the hash number. In other words, the hash number fits the original file the way a fingerprint fits only one person, and thus it guarantees that if even a single character is changed, there will be a mismatch between the hash number and the file, and the recipient will be able to tell that the file has been changed or tampered with. On the receiving end, the hash number is recalculated and verified against the signature with the help of the sender's public key (which the recipient must, of course, already have been given).
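The fingerprint property of the hash number is easy to demonstrate. The Python sketch below hashes two messages that differ by a single character and shows that the resulting values do not match; a real digital signature would then encrypt this hash with the sender's private key, a step that requires a public-key library and is omitted here. The messages themselves are invented for illustration.

```python
import hashlib

# Hash two messages that differ in one character. Like a fingerprint,
# the hash fits only one exact sequence of bytes, so any tampering
# produces a mismatch the recipient can detect.
original = b"Please transfer $100 to account 12345"
tampered = b"Please transfer $900 to account 12345"

original_hash = hashlib.sha256(original).hexdigest()
tampered_hash = hashlib.sha256(tampered).hexdigest()

print(original_hash == tampered_hash)  # False: the change is detectable
```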
Encryption Although digital signatures are valuable in authenticating and validating programs and messages, an even higher level of security is provided in the encrypting of important files before transmission—essentially, turning the files into unreadable gibberish to all but the sender and receiver. Encryption can be used either in addition to or instead of a digital signature.
The process of encryption turns a readable message (known to cryptographers as plaintext) into a garbled version (known as ciphertext) for transmission. The coding itself relies on one of several encryption algorithms—roughly, sets of steps or instructions—that are based on the use of either a public key (asymmetric algorithm) or a private key (symmetric algorithm):
When an asymmetric algorithm is used, the key is public and the transmission can be encrypted by anyone who possesses the key. However, the encrypted transmission can be decoded (decrypted) only by someone who has a corresponding private key.
When an encrypted transmission is based on a symmetric algorithm, both the coding and decoding are performed by the same (presumably secret) key or by a decryption key that can be derived from the one used to encrypt the transmission.
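The symmetric case, where one shared key both encodes and decodes, can be illustrated with a deliberately simple toy cipher. The XOR operation used below is chosen only to make the round trip visible; real symmetric algorithms such as DES are far more elaborate, and the message and key here are invented for illustration.

```python
# A toy symmetric cipher: the same (presumably secret) key both
# encrypts the plaintext and decrypts the ciphertext.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with a byte of the key; applying
    # the same operation twice restores the original.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"meet me at noon"
key = b"secret"

ciphertext = xor_cipher(plaintext, key)   # unreadable without the key
recovered = xor_cipher(ciphertext, key)   # same key decodes it

print(recovered == plaintext)  # True
```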
The strength of encryption itself is dependent on the number of bits used for the key. This number varies, but certain lengths are currently standard:
40 bits, which would take about 1 trillion attempts to "crack" by brute force (meaning by trying every possible bit combination in sequence). Despite this seemingly daunting number, keys of this length have, in fact, proved to be breakable.
56 bits, the key length used by DES (the Data Encryption Standard), which is currently the maximum length key allowed for U.S. export. DES keys are much harder to break than 40-bit keys, but they too have been cracked.
128 bits, which is considered unbreakable by current methods. Keys of this length are available for use within the United States, but software based on 128-bit keys cannot currently be exported to other countries.
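The brute-force arithmetic behind these key lengths is straightforward: an n-bit key allows 2 to the nth power possible values, each of which an attacker might have to try. A few lines of Python make the scale of the numbers plain.

```python
# Count the keyspace for each standard key length mentioned above.
for bits in (40, 56, 128):
    print(f"{bits}-bit key: {2 ** bits:,} possible keys")

# 2 to the 40th is about 1.1 trillion, matching the "about 1 trillion
# attempts" figure for 40-bit keys.
print(2 ** 40)  # 1099511627776
```

Each additional bit doubles the keyspace, which is why the jump from 56 to 128 bits moves a key from "crackable with effort" to "considered unbreakable by current methods."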
Although encryption has been around as long as human beings have wanted to exchange coded messages, it has now entered the information age as a significant factor in Internet transmissions, growing well beyond romantic spy stories and cereal-box secret decoder rings.
Securing the Internet
In addition to authentication and encryption, which concentrate on securing the message, there are technologies that extend this security to the Internet itself. One of these is an e-mail protocol known as S/MIME (Secure/Multipurpose Internet Mail Extensions). Two others, known as SSL (Secure Sockets Layer) and PCT (Private Communication Technology), are designed to provide privacy through client/server authentication. Yet another is based on a concept known as VPN—virtual private network.
To back up a bit…MIME (Multipurpose Internet Mail Extensions) is a well-known and widely used protocol for Internet e-mail. Developed by the IETF, MIME was designed to allow mail messages to include not only text but also graphics, sound, and video. To do this, MIME uses the message header—the part of an e-mail message that describes the message for transmission—to define the content of the message itself. That is, if the message includes a sound file, the header says so. At the receiving end, software can use the information in the header to call on appropriate programs to display, play, or otherwise handle the different media types. Although MIME standardizes the transmission of multimedia documents, it does not provide much in the way of security. To handle this task, S/MIME was invented. S/MIME, in brief, is MIME with support for digital signatures and encryption.
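The way a MIME header describes a message's content can be seen with Python's standard email library, which builds MIME-formatted messages. In the sketch below, setting the body automatically adds a Content-Type header that tells receiving software what kind of data follows; the addresses are invented for illustration.

```python
from email.message import EmailMessage

# Build a simple MIME message and inspect the header that describes
# its content for the receiving software.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Report"
msg.set_content("The report is attached.")

# The header declares the media type of the body, which is how the
# recipient's software knows whether to display, play, or save it.
print(msg["Content-Type"].split(";")[0])  # text/plain
```

Attaching a sound or image file with add_attachment would add further MIME parts, each carrying its own content-type declaration.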
SSL and PCT
SSL was developed by Netscape Communications; PCT was developed and submitted as an IETF draft by Microsoft. Both SSL and PCT have the same goal in mind: to ensure communications privacy through the use of authentication and encryption as transmissions pass between client and server. Although the two protocols differ in certain respects, both allow application-related protocols such as HTTP, FTP, and Telnet to run on top of them without problem. Both also begin a session with an initial handshake (exchange between client and server) during which the communicating computers agree on an encryption algorithm and key and verify that they are, indeed, the parties they appear to be. Once these formalities are over with, communications between the client and server are encrypted throughout the session.
VPNs

Virtual private networks—VPNs—are a relatively recent phenomenon, but they are rapidly gaining in popularity as a cost-effective means of using public telecommunications and the Internet to provide secure, private, computer-to-computer communications between LANs and between telecommuters and the corporate network.
In essence, a VPN uses the Internet as a corporation's private communications medium and thus bypasses the cost of supporting leased lines or other dedicated, private networking options. Security on a VPN involves encryption and, usually, a method of transmitting packets over the Internet through a connection known as a tunnel, which forms a private, though temporary, path between the two communicating computers.
Among the protocols that support VPNs is one known as PPTP (Point-to-Point Tunneling Protocol), which was developed by Microsoft and has gained significant support in the networking industry. An extension of the standard PPP (Point-to-Point Protocol) used to package datagrams for transmission over dial-up and other direct links, PPTP encapsulates encrypted PPP packets in secure (and securely addressed) "wrappers" suitable for transmission over the Internet. It also sets up, maintains, and ends the connection forming the tunnel between the communicating computers.
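The wrapping-and-unwrapping at the heart of tunneling can be sketched in a few lines. This is an illustration of the idea only, not the real PPTP format (actual PPTP uses GRE encapsulation; the header layout below is invented for clarity):

```python
# Illustrative (not protocol-accurate) sketch of tunneling: the private,
# already-encrypted payload is wrapped in an outer "envelope" whose
# addresses are routable on the public Internet.
import struct

def encapsulate(inner: bytes, outer_src: bytes, outer_dst: bytes) -> bytes:
    """Prefix the encrypted inner packet with an outer wrapper header:
    4-byte source address, 4-byte destination address, 2-byte length."""
    header = outer_src + outer_dst + struct.pack("!H", len(inner))
    return header + inner

def decapsulate(wrapped: bytes) -> bytes:
    """At the far end of the tunnel, strip the wrapper to recover the
    inner packet, which is then decrypted and delivered locally."""
    (length,) = struct.unpack("!H", wrapped[8:10])
    return wrapped[10:10 + length]

inner = b"encrypted PPP frame"                      # stand-in payload
wrapped = encapsulate(inner, b"\x0a\x00\x00\x01",   # private 10.0.0.1
                      b"\xc0\xa8\x01\x01")          # example outer address
print(decapsulate(wrapped) == inner)                # True
```

The outer header is all the public Internet ever routes on; the inner packet, with its private addressing, travels through unseen — which is exactly what makes the path a "private, though temporary" tunnel.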
Proxies and firewalls
Secure protocols, encryption, and digital signatures are essential for guarding the data that travels over the Internet, but although they can keep that information away from prying eyes, they cannot do much to physically keep outside intruders away from an internal network. That job is handled nicely by firewalls and, to a lesser extent, by agents known as proxies, or proxy servers.
A network firewall, like the firewall in a house, is a barrier designed to prevent something bad from happening. In the house, the firewall is meant to keep a fire from spreading out of control. On a network, a firewall is used primarily to keep intruders out, although it can, in a reversal of roles, also be used to block outgoing traffic to certain sites from the inside. Typically, a firewall is a barrier set up in a router, bridge, or gateway that sits between the network and the outside world. This firewall stands at the only "doorway" into or out of the network, and like a sentry at a gate, it watches over the traffic, examining each packet to determine whether to discard it (bad packet) or let it through (good packet).
In deciding whether to score a particular packet as a "pass" or as a "fail," a firewall typically relies on a mechanism called a packet filter and on a table that lists acceptable and unacceptable addresses. As packets pass through, the firewall examines source and destination addresses and/or the ports to which the packets are sent. If the address or port on a packet happens to be unacceptable, it blocks the packet. If the address or port is on the OK list, it sends the packet on through. In this way, the firewall can, say, block all packets addressed to a particular destination or those sent to the port associated with a particular service, such as Telnet.
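The pass-or-fail logic of a packet filter is simple enough to sketch directly. The rule tables and the dictionary "packet" format below are invented for illustration; real filters work on raw packet headers inside a router or OS kernel:

```python
# Minimal sketch of the address/port packet filter described above.
# Rule tables and packet representation are illustrative only.
BLOCKED_ADDRESSES = {"203.0.113.9"}   # example untrusted host
BLOCKED_PORTS = {23}                  # e.g. block the Telnet service

def filter_packet(packet: dict) -> bool:
    """Return True to pass the packet through, False to discard it."""
    if packet["src"] in BLOCKED_ADDRESSES or packet["dst"] in BLOCKED_ADDRESSES:
        return False                  # bad address: discard
    if packet["port"] in BLOCKED_PORTS:
        return False                  # blocked service: discard
    return True                       # on the OK list: let it through

print(filter_packet({"src": "198.51.100.4", "dst": "192.0.2.1", "port": 80}))  # True
print(filter_packet({"src": "198.51.100.4", "dst": "192.0.2.1", "port": 23}))  # False (Telnet)
print(filter_packet({"src": "203.0.113.9", "dst": "192.0.2.1", "port": 80}))   # False (bad host)
```

The last two calls show the two kinds of "fail" the chapter mentions: one blocked by service port, one by source address.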
On a higher level, firewalls can also act at the application level where, instead of examining addressing information, they examine the packets themselves, discarding those that fail certain criteria. Such firewalls can be used, for example, to filter e-mail based on content, size, or some other important feature.
Proxy servers, or proxies, are like firewalls in forming a barrier between the internal network and the outside world. In this case, however, the proxies serve the network by standing in for internal computers as they access the Internet. By presenting a single IP address to the world and thus hiding the identities (addresses) of the computers within the network, proxies defend against intrusive practices. One example of such practices is spoofing, in which an invader masquerades as a computer on the network, either to "attack from within" or to intercept information sent to the computer it pretends to be. Like firewalls, proxies can also be used to limit Internet access, for example to prevent employees from visiting undesirable sites or those that are not work-related.
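The address-hiding a proxy performs can also be sketched briefly. The addresses and the site blocklist here are invented examples, and a real proxy of course relays full network traffic rather than returning tuples:

```python
# Illustrative sketch of a proxy: every internal client's request leaves
# the network bearing the proxy's single public address. Addresses and
# the blocklist are invented examples.
PROXY_PUBLIC_ADDRESS = "198.51.100.1"
BLOCKED_SITES = {"games.example.com"}   # non-work-related site

def forward_request(internal_client: str, site: str):
    """Return the (source address, site) pair actually sent out to the
    Internet, or None if the proxy refuses the request."""
    if site in BLOCKED_SITES:
        return None                     # access control, as with a firewall
    # The outside world sees only the proxy's address, never the
    # internal client's, so internal addresses cannot be spoofed.
    return (PROXY_PUBLIC_ADDRESS, site)

print(forward_request("10.0.0.7", "www.example.com"))    # ('198.51.100.1', 'www.example.com')
print(forward_request("10.0.0.7", "games.example.com"))  # None
```

Note that `internal_client` never appears in what goes out: that substitution of one public address for many private ones is the "standing in" the paragraph above describes.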
Well, there you have it, an introduction to networking that started with peer-to-peer PC networks and ends with the guardians at the corporate gates. Is there more to know? Of course. Not only is there more to learn about networks in terms of both depth (detail) and breadth (topics not covered here), but networks themselves are anything but static environments—they change constantly.
Because this book began with a quick look back at networking in the past, let's end it with an equally quick look at networking to come. No one, of course, can predict the future, and there is always the possibility that some still-unknown visionary will come up with a new development that will radically change networking, the Internet, or computing in general. Still, here are some network-related developments that are either already here or looming on the not-too-distant horizon.
Network throughput is leaping from megabits per second to gigabits per second. Faster and more reliable hardware continues to evolve, and a myriad of new devices, from pagers to phones and so-called smart cards (chips on a card), are being readied for Internet access, e-mail, and other such network-related uses. New approaches to networking, such as ATM and FDDI, are either approaching or entering the mainstream.
Security, as you've seen, has taken on added—and critical—dimensions as networks have ventured beyond the corporate walls into the telecommunications network and the Internet. E-commerce is encouraging the development of electronic "money" and electronic wallets, as well as driving standards for ensuring financial and personal privacy. And work on the Internet itself is under way to provide the world with much-needed increased speed and bandwidth.
In the corporate world, the Web itself has worked its way into business networking in ways that will continue to blur the distinction between local and remote resources. Web technologies are now in many areas inseparable from networking technologies, and they too have no intention of standing still. Currently, for example, HTML is in the process of giving way to a newer markup language called XML (Extensible Markup Language). XML, while still evolving and not yet fully standardized, is being developed to enable Web site creators to describe not only the look of a Web page, but also the content on that page in a way that will add new levels of flexibility and interactivity to the Web.
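A tiny example makes the HTML-versus-XML distinction concrete. Where HTML tags describe how a page looks, XML tags can name what the data is; the element names below are invented for illustration, and the snippet is parsed with Python's standard library:

```python
# Sketch of XML describing content rather than appearance. The document
# structure and element names are invented for illustration.
import xml.etree.ElementTree as ET

document = """
<book>
  <title>Step Up to Networking</title>
  <publisher>Microsoft Press</publisher>
  <chapter number="10">The Basic Web</chapter>
</book>
"""

root = ET.fromstring(document)
# Because the tags name the content, software can extract meaningful
# fields without guessing from fonts or layout:
print(root.find("title").text)             # Step Up to Networking
print(root.find("chapter").get("number"))  # 10
```

It is this machine-readable description of *meaning*, not just presentation, that promises the "new levels of flexibility and interactivity" described above.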
And, although they are not strictly network-based, recent developments such as Java and so-called open source software are both finding their way into networks, networking, and the Internet. Java, developed by Sun Microsystems, is a programming language created to allow programs to run, unmodified, on multiple computer platforms. Right now, it is commonly used, as is Microsoft's ActiveX technology, for creating small programs (known as ActiveX objects or as Java applets, depending on the technology used) that can be embedded in Web pages in order to customize them or to make them more interactive and thus more responsive to the user. Open source, which refers to programs whose developers make their program code freely available, is closely tied to a free operating system known as Linux (roughly based on the UNIX operating system) that is widely touted by its admirers as a fast, stable environment for network servers. It also appears that open source software will, in some manner, play a role in the evolution of Java, too.
Such is the past, present, and immediate future of the networking world—an interesting place, as you've hopefully come to see. Although this book parts company with you now, I hope it has helped you a little way down the road to understanding networks. From this point on, no matter how fast or how much farther you choose to travel, may your journey be a good one.
About the Author
JoAnne Woodcock is the author of several popular computer books, including Understanding Groupware in the Enterprise, The Ultimate Microsoft Windows 95 Book, The Ultimate MS-DOS Book, and PCs for Beginners, all published by Microsoft Press. She is also a contributor to the Microsoft Press Computer Dictionary.
Copyright © 1999 by Microsoft Corporation
We at Microsoft Corporation hope that the information in this work is valuable to you. Your use of the information contained in this work, however, is at your sole risk. All information in this work is provided "as is", without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by Microsoft Corporation. Microsoft Corporation shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if it has been advised of the possibility of such damages. All prices for products mentioned in this document are subject to change without notice. International rights = English only.