The Internet and the Web
Chapter 9 from Step Up to Networking, published by Microsoft Press
And now, having traveled the road from small peer-to-peer networks to clients and servers, LANs, extended LANs, and WAN technologies, you come to the biggest, "baddest" network of them all: the Internet. The Internet is in large part the offspring of Vinton Cerf, who is often called the "father of the Internet" for his work in developing the now ubiquitous TCP/IP networking protocol.
But why discuss the Internet—or more specifically, the World Wide Web—in a book about networking? Well, why not? Even though many people see the Internet as something of an entertainment and shopping medium, that same Internet has fueled the explosion of interest in "Net" technologies ranging from Web browsing, cross-platform software development, intelligent agents, and push-and-pull information retrieval to e-commerce and the very business-oriented development of intranets and (to a lesser extent as yet) of extranets. The Internet has even caught the attention of governments around the world. Although the world has yet to find out whether government interest is a good thing, the question is certainly vigorously debated by technical and nontechnical people alike.
But I digress.
Structure of the Internet
As mentioned early in the book, the Internet (capital I) is an internetwork (lowercase i), a global conglomeration of smaller networks able to communicate and transfer information among themselves. These days—since the early to mid 1990s—it is also the highest flying, most widely publicized aspect of high technology since…well, since what? Perhaps since the Apple II and the IBM PC took center stage and started the process of convincing the entire world that computing was within the grasp of ordinary nontechnical mortals.
To most people, the Internet is something you connect to through a modem and a telephone line or, if you're lucky, through a much faster ISDN or xDSL line or a cable modem. But even though this single connection is all that's needed to access the Internet, the network itself is a complex creation. If you were able to see the structure of the Internet, say from a vantage point in space, you would see it as a global mesh of different networks and network levels, as shown in Figure 9-1:
Regional and Other Networks
Of course the world is a big place, and so geography also plays a large role in determining the structure of the Internet. In the United States, for example, the Internet is made up of a number of regional networks serving the northeast, midwest, west, east, southeast, northwest, and central California. There is also one regional network—appropriately named CERFnet—for international and western United States traffic. To join up with the larger Internet, these regional networks connect to a national backbone through one of four major locations known as network access points or NAPs located near large cities: San Francisco, Washington, D.C., Chicago, and New York.
So now comes the question of how everyone connects to this universal marketplace of ideas and products. The answer: through an Internet Service Provider (ISP) or an online service provider, such as the Microsoft Network (MSN) or America Online (AOL). Both ISPs and online service providers are the vendors, so to speak, that provide a pipeline to the Internet—usually through a connection to a regional network and, through that connection, to the Internet backbone. As a group, these providers are businesses with the equipment and technology needed to provide high-speed access to the Internet over communications lines such as T1. Some of these providers are national or international companies, such as MCI and AOL. Others are small organizations that provide access to individual cities or relatively small geographic regions.
Internet and Web Commonalities
Although the Internet is text-based and the World Wide Web is graphical, the Web is, as most everyone knows, part of the Internet. It just happens to be the popularized part inhabited by small businesses, multinational corporations, Hollywood, television, the news media, and even providers of hardware and software for accessing the Internet. The Web is also the part of the Internet characterized by pretty pictures, sound, video, and animated banner ads that visually scream "click me" as they scroll, bounce, jump, fly, or slither across the screen.
Note: Although the Web is sometimes referred to as an "Internet service," that description seems a little disingenuous in light of its size and the impact it has had in the pre-millennium mid-to-late 1990s. Somehow, the description seems about as appropriate as calling brain surgery a "BAND-AID® service." (No disrespect to BAND-AIDs intended—it's a question of magnitude.)
At any rate, since the Web is actually the collection of hyperlinked documents that forms part of the global Internet, there must be at least a few things the two have in common. And so there are, starting with the way Internet and Web sites are organized and named.
In addition to its physical structure and organization, the Internet is built upon the concept of domains. These domains and the conventions used in creating and managing them were devised as a means for:
maintaining order in what might otherwise become a chaotic virtual world
allowing for orderly, continued growth of the Internet
The domain name system
Somewhat like the names used in classifying plants and animals (for example, mammal/big cat/lion), Internet domains contribute to a classification system—the Domain Name System, or DNS—that uniquely identifies sites based on a tree-like hierarchy that includes a top-level domain, a second-level domain and, often, one or more subdomains.
So what does all this mean to you? Well, first of all, a DNS site name looks like the following:

microsoft.com

In this name:
com represents the top-level domain
microsoft represents the second-level domain—in this case, the name of a rather familiar business
a period (pronounced "dot") separates the top-level and second-level domain names
But that's just the beginning. Because the DNS is based on a treelike hierarchy, domains at one level can be "parents" to multiple domains on the next (lower) level.
So, for example, a domain name like the preceding one can be extended further to include multiple subdomains within the site and, possibly, the names of host computers within the subdomains. Using an imaginary example, here are two names representing different subdomains and hosts within a business:

lion.bigcat.msftcats.com
tabby.smallcat.msftcats.com

In these names:
msftcats.com represents the second-level and top-level domain names
bigcat and smallcat represent different subdomains within the site
lion and tabby represent different hosts within the subdomains
Notice, by the way, that even though your eye probably reads domain names from left to right, the name is resolved from right to left, with the highest-level domain at the far right.
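That right-to-left reading is easy to demonstrate in a few lines of Python (using the imaginary host name from the example above):

```python
def resolution_order(name):
    """Return the labels of a domain name in the order a resolver
    considers them: top-level domain first, host last."""
    return list(reversed(name.split(".")))

# The eye reads left to right, but resolution proceeds right to left:
print(resolution_order("lion.bigcat.msftcats.com"))
# ['com', 'msftcats', 'bigcat', 'lion']
```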
Top-level Internet domains
The word domain itself brings to mind such synonyms as kingdom and realm, and Internet domains are, indeed, something like virtual "kingdoms," though not in the sense of being ruled by different leaders, but rather in the sense of being united by something that all members have in common. For the top-level domains, that "something" is either geography or type of organization.
The geographic domains group sites by country, with each country assigned a two-letter abbreviation. Some examples:
.fr for France
.de for Germany
.ca for Canada
.es for Spain
.ar for Argentina
.jp for Japan
.za for South Africa
.us for the United States
The domains that group sites by type use the three-letter abbreviations so familiar to Internet users:
.com for commercial sites
.org for noncommercial organizations
.gov for the U.S. government
.net for network-related groups
.edu for educational institutions
.mil for the U.S. military
.int for international organizations
And within all these domains is the vast array of Internet sites—comparable to stalls in the world's biggest bazaar—that are created, owned, and maintained by various individuals and organizations wanting to announce themselves, their products, their interests, and even their philosophies to anyone who cares to seek out and visit them. (Some sites are selective enough to require visitors to subscribe and possibly pay for their services, but most are free.)
DNS Databases and IP Addresses
The preceding sections have described the DNS in terms of naming Internet sites, because those names are the ones you see in ads ("come visit our Web site at www.microsoft.com"), and they are the ones you type in the address bar of your Web browser whenever you want to visit a particular site.
It's very important to note, however, that DNS also refers to databases distributed among a number of DNS name servers. To understand why this is important, start by thinking about how computers contact each other on the Internet. When you type www.microsoft.com to visit the Microsoft Web site, does your computer send out that particular stream of characters and hope that another computer named www.microsoft.com answers? Not at all.
What your computer does is use the microsoft.com site's IP (Internet Protocol) address in communicating. This address is totally numeric, although it does use dots just like the friendly text-based DNS names do, and it looks something like this: 184.108.40.206. This address is very strictly defined, byte by byte, and is written in what is technically known as dotted decimal (or dotted quad) notation. It is also the only "name" computers use to recognize one another on the Internet.
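A quick sketch of how a program might pull such an address apart, octet by octet (a toy parser for illustration; Python's standard ipaddress module does this job properly):

```python
def parse_dotted(address):
    """Split a dotted-decimal IP address into its four octets,
    checking that each one fits in a single byte (0-255)."""
    octets = [int(part) for part in address.split(".")]
    if len(octets) != 4 or not all(0 <= o <= 255 for o in octets):
        raise ValueError("not a valid IPv4 address: " + address)
    return octets

print(parse_dotted("184.108.40.206"))  # [184, 108, 40, 206]
```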
However, not many people would be able to remember such strings of numbers, crucial though they are, and so DNS and its databases come to the rescue. Through its databases, DNS matches friendly names to IP addresses, rather like a phone book matches people names to telephone numbers. In so doing, the DNS databases ensure that humans can use human-friendly words and computers can use computer-friendly numbers, while also guaranteeing that the site you request and the site your computer contacts are always one and the same.
As you can see, the job performed by the DNS name servers is critical, since they, and they alone, can turn a typed address into its corresponding numeric IP address. However…DNS name servers are not monolithic machines that sit somewhere "out there," surveying the entire Internet, matching clients (visiting computers) and servers (Internet sites) like some digital dating service.
The Internet hierarchy, remember, is composed of different domain levels. DNS name servers do their matchmaking at different levels, too, and the DNS name server at any given level is considered the name/address authority for that level.
At the very highest level, for example, the one that corresponds to the top-level domains (com, org, net, and so on), the DNS authority for top-level domain names is known as a root server. This server contains the information needed to locate the server for, say, microsoft.com. But that's all it does. The root server does not concern itself with any lower levels—subdomains—within microsoft.com. That job belongs to lower-level servers that contain information about subdomains, sub-subdomains, and so on. In a sense, you can think of the process of passing authority from one level down to the next as a computer equivalent of trickle-down economics.
So, given the existence of top-level domains, subdomains, DNS name servers, and address resolution from site name to IP address, how exactly does computer A go about contacting site B (say, bigcat.msftcats.com), when both are total strangers? This is what happens:
First, computer A contacts its local DNS name server, saying "I want to get in touch with site B."
If the local name server already knows how to do this, it responds by sending A the address for site B. If the local name server does not know the address, it sends the request on to a root server. How does it know what root server to contact? That information is already part of its "knowledge base."
The root server, although it cannot resolve the entire address for site B, does give the local name server the information it needs to contact msftcats.com.
Armed with this information, the local name server then contacts msftcats.com, which is then able to provide the complete address for bigcat.msftcats.com.
This whole process of name resolution is known as an iterative query because the request is sent repeatedly (iteratively) through name servers in the domain hierarchy until the complete address—or a "no such site exists" response—is produced.
Although the process sounds tedious and time-consuming, there are two ways in which the DNS system speeds things along. First, the top-level database is replicated (reproduced) on many root servers located throughout the world, so the task of resolving addresses at the highest level does not need to be handled by one or two overworked machines. Second, servers at all levels cache the addresses they've already resolved. When a request for a particular location arrives, these servers consult their cache first. So, if they find the needed address, they can send an immediate response to the requesting computer. Only if they cannot find the address in their cache do they then have to go through the iterative string of queries needed to resolve the name.
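The flavor of this iterative, cached lookup can be captured in a few lines of Python. The server tables and addresses below are invented for illustration; a real resolver speaks the DNS wire protocol to servers it discovers from root hints:

```python
# Toy model of iterative DNS resolution with caching.
ROOT = {"msftcats.com": "ns.msftcats.com"}   # root server: who is authoritative?
AUTHORITATIVE = {                             # msftcats.com's own records
    "bigcat.msftcats.com": "10.0.0.1",
    "smallcat.msftcats.com": "10.0.0.2",
}
cache = {}

def resolve(name):
    if name in cache:                         # answer already known: no queries
        return cache[name]
    domain = ".".join(name.split(".")[-2:])   # e.g. msftcats.com
    referral = ROOT.get(domain)               # step 1: ask a root server
    if referral is None:
        return None                           # "no such site exists"
    address = AUTHORITATIVE.get(name)         # step 2: ask the authoritative server
    cache[name] = address                     # remember the answer for next time
    return address

print(resolve("bigcat.msftcats.com"))  # resolved iteratively: 10.0.0.1
print(resolve("bigcat.msftcats.com"))  # second request served from the cache
```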
NSI, IANA, ICANN, and the Future
As you've probably guessed, the Domain Name System involves some heavy-duty tracking and record keeping to ensure that site names are not duplicated, that site names and IP addresses are correctly maintained, and that new sites and their corresponding IP addresses are added to the appropriate databases. So who does all this?
Within an organization, it's the responsibility of the organization itself to keep track of its subdomains, hosts, and various subgroups (called zones) that are defined by either administrative or authoritative responsibilities.
For the top-level domains, the responsibility for registering names for com, net, and org has until recently been in the hands of a for-profit organization called Network Solutions, Inc. (NSI). The responsibility for assigning IP addresses has been handled by a group known as IANA (Internet Assigned Numbers Authority) located at the University of Southern California. Both groups operated under exclusive contracts with the United States government.
In 1998, however, the U.S. Department of Commerce, after considerable review and sometimes contentious debate, approved an agreement to privatize these responsibilities and hand them over to a new corporation known as ICANN (Internet Corporation for Assigned Names and Numbers). As of the beginning of 1999, ICANN is in the process of developing its organization under the supervision of a 19-member board of directors.
Organizations and Standards Groups
In addition to IANA (and now ICANN), there are numerous organizations involved in various aspects of the Internet. Some are highly technical organizations concerned with developing and maintaining Internet standards. Others are concerned with Internet-related issues such as security, privacy, and—in the case of the Electronic Frontier Foundation (EFF)—the preservation of civil liberties and free speech for Internet users. The following list briefly describes some globally recognized and technically oriented organizations frequently mentioned in terms of the Internet and its continued growth and evolution:
The Internet Society, or ISOC, is a nonprofit membership organization based in Reston, Virginia, with members from around the world. Its focus is, as described on its home page (www.isoc.org), on "standards, education, and policy issues" that affect the Internet.
The Internet Architecture Board, or IAB, is an ISOC technical advisory group involved in several aspects of the Internet. As part of its function, it advises the Internet Society on technical and other matters. This group also oversees the architecture for Internet protocols and procedures, supervises standards processes, and represents the Internet Society in dealing with other organizations concerned with Internet standards and other issues.
The Internet Engineering Task Force, or IETF, is an organization of individuals interested in the evolution and operation of the Internet. The work of the IETF is handled by various working groups that deal with issues such as routing, operations and management, user services, and security. The IETF is open to new members worldwide and is overseen by the Internet Engineering Steering Group (IESG), a supervisory arm of the Internet Society.
The Internet Research Task Force, or IRTF, is a group of volunteers, complementary to the IETF, that focuses on long-term projects related to the Internet—for example, the issue of privacy as it relates to e-mail. The IRTF is supervised by the Internet Research Steering Group (IRSG) and, like the IETF, is part of the Internet Society.
The World Wide Web Consortium, or W3C, is (as the name clearly indicates) concerned solely with the World Wide Web—specifically, with standards and protocols designed to promote both the growth and interoperability of the Web. The W3C is often in the technical news as the standards body to which developing technologies are submitted for consideration and possible approval. The organization is young—dating only from 1994—but it boasts an international membership and an influential voice in Web development and standards.
Well, that's the "scoop," more or less, on the Internet as a whole and the way it is organized and maintained. Now, move in for a closer look at how it operates and (the fun part) what it offers.
Despite its mammoth size and scope and the speed of the backbones, routers, and other elements of its infrastructure, the Internet is at heart—at least for now—a dial-up network in that end users don't normally connect through ISDN lines, cable modems, or other high-speed technologies. They use plain old telephone service.
Where the Internet is concerned, however, this voice-based telephone service is required to carry serial transmissions and enable computer-to-computer connections. And the computers themselves must be able to establish and terminate sessions as well as agree on framing, error control, and other communications niceties that occur at the data link layer. The solution used during most sessions happens to be one of three serial-line protocols designed to carry IP traffic: PPP, SLIP, and CSLIP (a compressed version of SLIP).
Of the three protocols used by most ISPs, PPP, the Point-to-Point Protocol, is the newest and fastest. It is also an Internet standard. A flexible means of enabling computers to communicate, PPP supports multiple protocols, including the Internet's TCP/IP (of course), as well as IPX, AppleTalk, and others. PPP is based upon two main elements:
A Link Control Protocol (LCP), which is used to set up, test, negotiate, and end a computer-to-computer link
A Network Control Protocol (NCP), which is used to negotiate and establish the details related to the protocols to be used during the transmission
Essentially, this is how the pieces fit together in a PPP connection to the Internet:
First, a PC uses its modem to call the user's ISP.
Then, LCP comes into play. During this phase of the connection, LCP establishes a link with the ISP's equipment, tests the link, and negotiates options, such as frame type and packet size, to be used for communication.
Next, the NCP is used to configure protocol-specific characteristics for use during the session. For example, it is at this point in many connections that the calling computer, through the NCP, is dynamically assigned a temporary IP address, which is needed in order for the computer to use the TCP/IP protocol stack. (As a side note, because PPP supports the dynamic allocation of IP addresses, matters are greatly simplified for the end user, who would otherwise have to provide a valid address himself or herself—not a very friendly option for nontechnical individuals.)
Now that the groundwork is complete, data transmission begins.
When it's time to end the session, NCP swings into action again to dismantle the network layer connection.
And finally, the LCP takes responsibility for terminating the connection gracefully.
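The sequence of phases just described can be sketched as follows (a simplified illustration only; real PPP, defined in RFC 1661, negotiates these phases frame by frame):

```python
# A toy walk through the phases of a PPP dial-up session.
def ppp_session():
    phases = []
    phases.append("dial ISP")                      # modem places the call
    phases.append("LCP: establish and test link")  # negotiate frame type, packet size
    phases.append("NCP: assign IP address")        # protocol-specific configuration
    phases.append("transmit data")                 # the session proper
    phases.append("NCP: tear down network layer")  # dismantle the network connection
    phases.append("LCP: terminate link")           # graceful hangup
    return phases

for step in ppp_session():
    print(step)
```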
In addition to all this, PPP supports two methods of authenticating users, PAP (Password Authentication Protocol) and CHAP (Challenge-Handshake Authentication Protocol). Both provide a measure of security in that the communicating computers can verify that they are, indeed, who they say they are.
SLIP is short for Serial Line Internet Protocol. A simpler, older means of enabling computers to communicate over a serial transmission line, SLIP has been widely used for Internet connections. Unlike PPP, SLIP supports TCP/IP only, but that is not a particular disadvantage, since the Internet itself is based on TCP/IP. SLIP, however, does suffer from some restrictions that make it less desirable than PPP for Internet connections:
SLIP does not include any means of error detection or correction.
SLIP does not support dynamic allocation of IP addresses, so the caller must know and be able to provide not only his or her own IP address, but the one assigned to the remote computer as well. If an ISP assigns IP addresses dynamically, the caller's software must be able to "catch" and use the assigned address, or else the caller must provide that information manually.
SLIP is not an Internet standard and so exists in a number of different, incompatible versions.
SLIP does not authenticate users, so there is no means of verifying the identities of the calling and called computers.
Despite these disadvantages, however, SLIP is still in wide use. That situation is likely to change over time, as support for PPP continues to grow.
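SLIP's simplicity is easy to appreciate in code: the whole protocol amounts to a framing rule (published as RFC 1055) that marks the end of each packet with a special END byte and escapes any data bytes that would otherwise be mistaken for it. A minimal encoder in Python:

```python
# SLIP framing constants from RFC 1055.
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet):
    """Frame one packet for a serial line: escape END/ESC bytes in the
    data, then append the END marker. Note there is no checksum at all,
    which is exactly SLIP's 'no error detection' weakness."""
    framed = bytearray()
    for byte in packet:
        if byte == END:
            framed += bytes([ESC, ESC_END])
        elif byte == ESC:
            framed += bytes([ESC, ESC_ESC])
        else:
            framed.append(byte)
    framed.append(END)
    return bytes(framed)

print(slip_encode(b"\x01\xc0\x02").hex())  # 01dbdc02c0
```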
CSLIP, as the initials in its name indicate, is a variation of SLIP. The C stands for Compressed, so CSLIP's complete name works out as Compressed Serial Line Internet Protocol.
Like SLIP, CSLIP is designed for serial traffic and supports TCP/IP. Where it differs from SLIP is in the packet header, which is compressed from the 24 bytes used in SLIP packets to a mere 5 bytes in CSLIP. In order to compress the packet headers, CSLIP takes advantage of the fact that certain header fields are repeated in packet after packet. CSLIP eliminates the fields that succeeding packets have in common with those that have already been transmitted, and thus it includes only those that differ in each packet. Although shrinking the header without also shrinking the data portion of a packet would not seem to be that much of an advantage, in actuality it does optimize SLIP, especially when long documents are transmitted.
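The principle behind that header shrinkage can be sketched in Python. This toy version works on named fields rather than on the bit-level TCP/IP headers that real CSLIP (Van Jacobson compression, RFC 1144) operates on:

```python
def compress_header(header, previous):
    """Send only the fields that changed since the previous packet;
    the receiver reconstructs the rest from its copy of the last header."""
    if previous is None:
        return dict(header)      # first packet: send everything
    return {field: value for field, value in header.items()
            if previous.get(field) != value}

first  = {"src": "10.0.0.1", "dst": "10.0.0.2", "seq": 1}
second = {"src": "10.0.0.1", "dst": "10.0.0.2", "seq": 2}

print(compress_header(first, None))    # full header on the wire
print(compress_header(second, first))  # only the changed field: {'seq': 2}
```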
Internet and Web Protocols and Services
There's a lot more to learn about the Internet, as you would imagine—not only more technologies, but more details about the technologies described here: bits and bytes per header field, frame construction, communication signals ("yes, I got it," "garbled—please resend," and so on), and many details about how software manages to allow remote connections, automatic logons, password verification, and on and on. And this recitation doesn't even begin to take into account the technologies currently under development, those awaiting approval from some standards body, or—significantly—the dreams firing the imaginations of the current and next generation of Internet pioneers who will take the world to the high-speed Internet2 and beyond.
Until these visionaries perform their miracles, there is still the current global network. Slow though it can sometimes be, especially where the Web is concerned, it is still a technological marvel. After all, how else can anyone, anywhere, access such a wealth of information so inexpensively (typically about $20 per month) through a simple telephone call?
And that brings you to the fun part of this chapter, a survey of some of the services and protocols that make the Internet and the Web what they are today.
Search Engines and Services
Given the massive number of sites and the even more massive number of documents available on the Internet, a beginner's first question is likely to be, "How do I find what I'm looking for?" Well, obviously, there are several ways:
Seeing the address of an Internet (usually a Web) site on television, in a newspaper, on a business card, and so on
Being told where to go by someone else ("Hey, you've gotta go check out the www.amazon.com Web site!")
Finding it yourself
And that last approach brings you to search engines and information services, of which there are many.
Web search engines
Search engines generally relate to Web searches. They are, quite simply, a fact of life on the Web. With them, you can find sites related to just about any conceivable topic. Without them, finding sites on the Web would be comparable to exploring the rain forests of the Amazon with blinders on. They are, in other words, necessary. And there are many well-known ones to choose from:
MSN Web Search
AltaVista
Yahoo
Infoseek
Although some search engines, such as AltaVista, simply return lists of sites, others, such as Yahoo and Infoseek, categorize their search results for ease of use and may even rate the results for you. Despite such differences, however, all search engines have a few features in common: They use keywords to index documents, and they rely on databases of stored information to retrieve Web sites relevant to a search.
Most search engines also provide the user with the means of performing either a simple search based on one or more keywords or a more elaborate search that allows the use of logical (Boolean) operators, such as AND, OR, and NOT.
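The Boolean operators just mentioned are easy to picture with a toy keyword index (the pages and keywords here are invented; real engines index millions of documents):

```python
# A tiny index mapping page addresses to their keywords.
pages = {
    "zoo.example": {"lion", "tiger", "zoo"},
    "pets.example": {"cat", "tabby"},
    "safari.example": {"lion", "safari"},
}

def search_and(*words):
    """Pages containing every keyword (AND)."""
    return {url for url, kw in pages.items() if all(w in kw for w in words)}

def search_or(*words):
    """Pages containing any keyword (OR)."""
    return {url for url, kw in pages.items() if any(w in kw for w in words)}

def search_not(word):
    """Pages that do not contain the keyword (NOT)."""
    return {url for url, kw in pages.items() if word not in kw}

print(search_and("lion", "safari"))  # narrows the search to one page
```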
In developing their stores of keywords and Web sites, some search engines rely on human indexers, some rely on existing indexes, and some rely on fascinating software tools—often called spiders or robots—that literally roam the Web to find and bring "home" new lists of sites and documents.
Although search engines and the Web make finding information a snap, there are also other ways—older, Internet-based ways—to find and retrieve information. One of the most widely used is the File Transfer Protocol, or FTP, which makes downloading both text and binary files extremely fast and easy. FTP is assisted by a service named Archie that helps in searches. Another frequently used search service is named Gopher. Like the less entertainingly named FTP, Gopher also has a search assistant—actually, two of them, named Veronica and Jughead. In addition, information seekers can turn to a UNIX-based search service named WAIS (for Wide Area Information Server).
Brief descriptions of these services follow.
FTP and Archie
FTP is a longtime staple of the Internet. A protocol in the TCP/IP suite, FTP runs at the application layer and provides access to huge numbers of files that have been publicly posted and made available for downloading. These files are maintained on numerous FTP servers, which people access as "anonymous" or guest users after providing their e-mail address as a password. To find information stored on FTP servers, users can turn to Archie, a search service that can help locate files either by name or through a descriptive keyword.
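The anonymous-login convention translates directly into FTP's command language (defined in RFC 959). The sketch below simply lists the commands a client would send; the server name and file are invented, and in practice Python's standard ftplib module carries out the real exchange:

```python
def anonymous_login(email, filename):
    """Return the FTP command sequence for an anonymous download."""
    return [
        "USER anonymous",     # log in as the guest account
        "PASS " + email,      # e-mail address serves as the password
        "TYPE I",             # switch to binary transfer mode
        "RETR " + filename,   # download the requested file
        "QUIT",               # end the session
    ]

for command in anonymous_login("user@example.com", "readme.txt"):
    print(command)
```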
Gopher, Jughead, and Veronica
Gopher takes its name from the slang term "go-fer," for someone (or something) required to run back and forth, fetching this and that for someone else. Unlike FTP, which accesses different documents stored on different servers, Gopher links information servers through indexes into a single, searchable "place" known as Gopherspace.
Within Gopherspace, documents and other information are organized hierarchically, and visitors use a menu-driven system to work through increasingly specific levels until they reach the information they seek. At that point, they can also rely on Gopher to deliver the information to their computers.
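Gopherspace's menu-driven hierarchy can be pictured as a nested structure that visitors descend choice by choice (the menus and documents here are invented for illustration):

```python
# A toy slice of Gopherspace: menus within menus, documents at the leaves.
gopherspace = {
    "Animals": {
        "Big cats": {"Lions": "lions.txt", "Tigers": "tigers.txt"},
        "Small cats": {"Tabbies": "tabbies.txt"},
    },
}

def descend(menus, *choices):
    """Follow a series of menu choices down to a document."""
    node = menus
    for choice in choices:
        node = node[choice]
    return node

print(descend(gopherspace, "Animals", "Big cats", "Lions"))  # lions.txt
```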
To help with searches in Gopherspace, users can rely on one of two search services: Veronica and Jughead:
Veronica, the Gopher counterpart of Archie, searches the menus on all Gopher servers for information that matches the user's search. To help in narrowing a search, Veronica allows for the use of substrings and Boolean operators. Although the name Veronica is generally understood to be a reference to a friend of the comic-strip character Archie, the name is also considered an acronym for the rather convoluted Very Easy Rodent-Oriented Netwide Index to Computerized Archives. (Someone had to work hard on that one.)
Jughead is a service provided by special Jughead servers that indexes the highest-level Gopher menus by keyword. With Jughead, a user can limit a search to specific Gopher servers rather than all of Gopherspace. Like the name Veronica, the name is a two-edged reference. Here, it refers to a friend of both Archie and Veronica, and it is also an acronym for Jonzy's Universal Gopher Hierarchy Excavation and Display.
Although businesses and corporations implement their own e-mail services within their own networks, everyone who uses the Internet knows that e-mail is not limited to such private services. One of the most popular uses of the Internet is, in fact, e-mail. This global message service is available through numerous software applications, all of which enable Internet users to communicate with family, friends, strangers, and even Internet sites—with anyone, in fact, who can be addressed by the familiar form:

username@location

In this address:
username is the recipient's e-mail name (sometimes all or part of the person's real name, other times an identifying "handle")
@ is the "at" sign, which is always included in the address
location is the place—the electronic post office—where the recipient's mail is delivered and stored
Internet mail transport and delivery standards are supported by the Simple Mail Transfer Protocol, or SMTP, which runs at the application layer. SMTP is part of the TCP/IP protocol suite and provides, as the name suggests, a simple e-mail service.
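The username@location form splits neatly at the "at" sign. Here is a toy parser (the address is invented; real Internet address syntax, as defined in the mail standards, is considerably more permissive):

```python
def parse_address(address):
    """Split an e-mail address into its username and location parts."""
    username, _, location = address.partition("@")
    if not username or not location:
        raise ValueError("not a valid address: " + address)
    return username, location

print(parse_address("tabby@msftcats.com"))  # ('tabby', 'msftcats.com')
```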
News on the Internet has two different meanings. First, there is news of the sort defined by television anchors, newspapers, and various special-focus magazines and journals. This type of news is widely available on the Web. Some of it is subscription-based, but much news is provided free—for example:
MSNBC News on MSN
CNN and CNN Finance
The New York Times
And then there is the news that ordinary people like to exchange through online posts to discussion groups and real-time chats. This type of information most likely won't make the ten o'clock news programs, but it's usually of great interest to the people involved (and to those who lurk in the background, reading but not contributing to the general discussion).
This type of news is handled on the Internet by services based on the Network News Transfer Protocol (NNTP), a de facto standard that is used to distribute collections of articles called newsfeeds to a bewildering array of interest-based newsgroups.
Since this is a book about networks, take a quick look at NNTP itself before going on to the news services and how they operate.
NNTP is a reliable, fast protocol that provides for downloads, just like FTP, but it also offers much more in terms of interactivity and selectivity. On the interactive front, NNTP supports communication between two news servers and also between clients and servers. Because of this interactivity, NNTP enables clients to download newsfeeds and newsgroups selectively, omitting those that are of no interest. In addition, NNTP supports the ability to query servers and to post news articles.
One of the most popular, widely used, and well-known news services implementing NNTP is known as USENET. USENET is a huge, 24-hour-a-day, every-day-of-the-year service that includes bulletin boards and chat rooms in addition to supporting thousands of newsgroups dedicated to topics of all sorts.
In order to access USENET, users subscribe (that's the term, though there's no charge) to the service, download a viewing program called a newsreader, and then subscribe to the newsgroups whose contents interest them. Once subscribed, users can then download some or all articles from a newsfeed and, if they choose, join in the fray by posting their own opinions or their responses to other opinions expressed in a particular thread (series of posts on the same topic).
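That kind of selective downloading can be sketched in a few lines (the group names and article titles here are invented for illustration):

```python
# A toy newsfeed: (newsgroup, article title) pairs arriving from a server.
newsfeed = [
    ("rec.pets.cats", "Best food for tabbies?"),
    ("comp.protocols.tcp-ip", "NNTP vs. FTP for file distribution"),
    ("rec.gardens", "Roses in clay soil"),
]

def fetch(feed, subscriptions):
    """Download only the articles from groups the user subscribes to."""
    return [article for group, article in feed if group in subscriptions]

print(fetch(newsfeed, {"rec.pets.cats", "comp.protocols.tcp-ip"}))
```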
Telnet
Telnet is a TCP/IP application-layer protocol that exists for one purpose: to let a computer log in to a remote computer and behave as though it were a terminal attached directly to that host. Thanks to the Internet's geographic scope, the remote computer can be anywhere. As long as the connecting computer provides terminal emulation capabilities (available, for example, in Windows NT and Windows 95/98), it can use the resources and programs installed on the remote machine.
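One small piece of what a Telnet client's terminal emulation must do is separate ordinary text from in-band protocol commands, which the server marks with a special IAC ("Interpret As Command") escape byte. A minimal sketch of that filtering step:

```python
# Sketch of Telnet IAC handling: separating printable data from the
# command sequences a server interleaves into the byte stream.
# The byte values are Telnet's own; the sample input is invented.

IAC = 255            # "Interpret As Command" escape byte
SB, SE = 250, 240    # subnegotiation start/end
WILL, WONT, DO, DONT = 251, 252, 253, 254  # option negotiation verbs

def strip_telnet_commands(data: bytes) -> bytes:
    """Return only the user-visible data, dropping command sequences."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if b != IAC:
            out.append(b)                    # ordinary data byte
            i += 1
        elif i + 1 < len(data) and data[i + 1] == IAC:
            out.append(IAC)                  # escaped 255 in the data stream
            i += 2
        elif i + 1 < len(data) and data[i + 1] in (WILL, WONT, DO, DONT):
            i += 3                           # verb plus one option byte
        elif i + 1 < len(data) and data[i + 1] == SB:
            end = data.find(bytes([IAC, SE]), i)
            i = end + 2 if end != -1 else len(data)  # skip subnegotiation
        else:
            i += 2                           # other two-byte command
    return bytes(out)

# A login prompt interleaved with a DO ECHO negotiation (option 1):
raw = bytes([IAC, DO, 1]) + b"login: "
print(strip_telnet_commands(raw))  # → b'login: '
```

A real client would also answer the negotiation (agreeing or refusing each option) rather than just discarding it, but the framing shown here is the same.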
MUDs, Chats, and Other Forms of Play
And, finally, what about some of the really fun things people get involved in on the Internet? There are some, though to a great extent, your definition of "fun" determines which services interest you most. Some newsgroups, for example, are fun—or at least entertaining. Others are serious, a fair number are highly technical, and sadly, some are downright disgusting.
However, for those with a more traditional idea of fun, two resources stand out as places to go when time allows: MUDs for those who like games, and chat rooms for those who prefer real-time conversations.
MUDs, or multiuser dungeons, are an outgrowth of the popular dungeons-and-dragons style of interactive, multiplayer role-playing games (RPGs). On the Internet, a MUD provides participants with a virtual game environment where each can play the part of a different character and all can interact in real time. A MUD is sometimes referred to instead as a MUSE (multiuser simulation environment). Along the same lines, for those who prefer high tech to fantasy, there are similar real-time environments known as MOOs (MUD, object-oriented) where individuals again interact but tend to concentrate on matters of the mind (programming, for example).
And what about chats? Many people, from children to senior citizens, relish these services. Available both on the Internet and on the World Wide Web, chats provide participants with a means of carrying on real-time conversations. On the Internet, chats are supported by several services that allow two or more people to converse in real time. One such service, known as Talk, allows two people to connect and carry on a conversation. Another, known as IRC (Internet Relay Chat), allows multiple participants to chat with one another. IRC generally dedicates channels to different topics and broadcasts comments to the entire group.
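Behind an IRC channel is a simple text format: each message is one line, optionally beginning with the sender's identity, with any multi-word parameter carried after a " :" marker. A sketch of splitting such a line apart, using an invented sample message:

```python
# Sketch of parsing the IRC line format. The structure (optional ':'
# prefix, command, space-separated params, ':' trailing parameter) is
# the protocol's; the nickname and channel below are made up.

def parse_irc_line(line):
    """Split one IRC line into (prefix, command, params)."""
    prefix = None
    if line.startswith(":"):
        # Leading ':' introduces the sender (nick!user@host or server name).
        prefix, line = line[1:].split(" ", 1)
    if " :" in line:
        # Everything after ' :' is a single trailing parameter, spaces and all.
        head, trailing = line.split(" :", 1)
        params = head.split() + [trailing]
    else:
        params = line.split()
    command, params = params[0], params[1:]
    return prefix, command, params

msg = ":alice!u@host PRIVMSG #networking :Anyone awake?"
print(parse_irc_line(msg))
# → ('alice!u@host', 'PRIVMSG', ['#networking', 'Anyone awake?'])
```

Broadcasting to a channel is then just the server relaying that PRIVMSG line to every client that has joined `#networking`.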
And that pretty much is that, as far as overviews of the Internet and the Web in general are concerned. The next stop—and the last chapter in this book—moves on to look specifically at the Web.
About the Author
JoAnne Woodcock is the author of several popular computer books, including Understanding Groupware in the Enterprise, The Ultimate Microsoft Windows 95 Book, The Ultimate MS-DOS Book, and PCs for Beginners, all published by Microsoft Press. She is also a contributor to the Microsoft Press Computer Dictionary.
Copyright © 1999 by Microsoft Corporation
We at Microsoft Corporation hope that the information in this work is valuable to you. Your use of the information contained in this work, however, is at your sole risk. All information in this work is provided "as-is", without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by Microsoft Corporation. Microsoft Corporation shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if it has been advised of the possibility of such damages. All prices for products mentioned in this document are subject to change without notice. International rights = English only.