Chapter 1 - Delivering an Enterprise


From the book Enterprise Application Architecture With VB, ASP and MTS by Joseph Moniz. (ISBN: 1861002580). Copyright ©1999 by Wrox Press, Inc. Reprinted by permission from the publisher.

For more information, go to https://www.wrox.com.

If you are anything like me, you're probably sick and tired of hearing the UNIX and mainframe pros rattling off the reasons why our PC-based systems can't handle the tough jobs their overpriced boxes perform. For quite a while now, we have had to bide our time, bite our tongues, and bear with their bloated sense of superiority. That time has passed. Our venerable PC is all grown up. It has matured into a powerful creature, more than capable of standing toe-to-toe with any of its alternatives. But be warned. This newfound prowess is really a double-edged sword. The fact that our equipment is up to the task doesn't mean that we are completely ready to step into the ring with the big boys yet. We still have a lot of work to do. This book is a step-by-step guide designed to give you all the tools you need to create an Enterprise Caliber System for your organization, no matter what size it is.


Using this Book

I want to avoid any confusion or misconceptions, so let me make a couple of things clear at the start. First, the main focus of this book is the design, development, and deployment of world-class applications across a distributed architecture. What that really means is that we can use as many machines as we need in order to accomplish a given task. So if we were collecting and managing information from every cash register across the globe for a major retail company, then we might need 80 or 100 machines to handle that task. But if our task is to design and develop software, then we can accomplish it with a single machine.

In other words, you can model an entire server farm and run all of the code in this book (or other code you develop using this technique) on a single Pentium class machine.

Then you can take that code and distribute it across 2, 3, 4 or even 100 or more machines if that is what the application calls for.

You will not need to change a single line of code to re-deploy your application.

The reason I have emphasized the multi-machine platform in this book is so that I could give you the concepts and specific instructions you need to deploy and use your application across such a platform. Most of the books I have read go to great lengths to say that this can be done, but then they fail to give you a clue about how to actually do it. This book is intended as a practical guide for delivering an enterprise. So, it contains the real-world instructions you are likely to need.

Second, you might notice that there is a lot of code in this book. Please don't let the volume of code cause you to believe that the coding techniques in this book are difficult to accomplish. My teams use the code in this book every day to get real work accomplished quickly and efficiently. Sometimes, I know, the actual code itself is really more valuable than anything I might have to say about it. I think a good programming book should be kind of like having a smart friend sitting next to you – one that you can count on. In other words, what you will learn in this book is not some 'pie-in-the-sky' theory of how things should be done. It's a practical guide that explains exactly how to do it.

I have presented the material in an order that I believe will give you the tools you need to be able to "read" the code more and more easily as the chapters progress. I have also provided several tools that will actually do the job of writing some of the nastier sections of code for you. I think about it like this. If I can handle some of the coding chores for you, then I have sort of paid a price for your time. I hope that price gives you the freedom you need to stop, take some time, and really consider some of the interesting ideas in this book without any unnecessary aggravation.

An Enterprise Caliber System

Distributed Architecture, ActiveX, Java Beans, OLE, DNA, ASP, XYZ, 123, blah blah blah. Right about now you are probably wondering if it will ever end. Don't worry, so is everyone else. Every once in a while, just when I think I've got it all figured out, someone goes and throws another new twist at me, and I begin to wonder if it is all worth it. How can I ever hope to build a reliable, stable system if someone keeps changing the rules? I will tell you how: take a hint from the classics. Even in these turbulent technical times, there are some timeless design principles that we can rely upon to guide us through the confusion. They are the same principles that good programmers and engineers have been using for years.

The designs that follow have been carefully crafted to allow us to construct a system that is powerful enough to meet the needs of today's enterprise while being flexible enough to allow an enterprise to grow and meet any future needs or programming challenges.

The cornerstone of this design is that the entire system is constructed in a modular fashion, which divides the processing into atomic units that can be tuned with pinpoint accuracy. This same atomicity allows the system to be flexible enough to handle anything else Uncle Bill at Microsoft may be planning to throw at us.

The Physical System

It is not possible to buy the perfect piece or pieces of hardware that will magically give an organization an Enterprise Caliber system. This is a hard notion for many companies to swallow. In the past, it may have been a perfectly reasonable approach to buy the biggest and best mainframe (or midrange) available and depreciate the beast over the next X number of years. If a company needed more computing power, they just bought a bigger box – problem solved. Today, while I imagine it's still possible to just go out and buy the biggest and best mainframe, most companies have come to the conclusion that this monolithic approach is probably not the best way to solve an organization's long-term information management problems.

I am not going to go into the cost-benefit analysis between the mainframe hardware and enterprise hardware approaches to a given problem. If you are reading this book, chances are that either you or someone in your organization has looked at the numbers and come to the conclusion that the enterprise solution is the less expensive option for some of its computing needs.

Mainframes

What I do want to talk about is the problems that occur when you try to apply the mainframe (or buy-a-bigger-box) mentality to the enterprise design problem. When most people envision a physical information system, what they conjure up is probably this image, or one rather like it – a big mainframe connected to a large number of client workstations:

[Image: a big mainframe connected to a large number of client workstations]

So, it is not surprising that when companies first started creating enterprise-level client-server solutions, they envisioned the client-server model as something of a scaled-down mainframe. It seemed like a perfectly reasonable solution to buy a single, powerful server and connect a bunch of clients to it. The problem with this approach is that a mainframe is really very much more than just a big box. It is a finely tuned orchestration of processes. The fact that these processes are executed, more or less, in the same box made the management of these different processes an expected, understood, and vital responsibility of the system's keepers.

Enterprise Servers - The Wrong Way

When companies took their first run at designing an enterprise using servers in place of the mainframe, the structured environment that the mainframe had provided was missing. When the performance of these systems turned out to be less than anticipated, everyone naturally assumed that this was due to the new server's physical lack of processing power:

[Image: a single server standing in for the mainframe, connected to a bank of clients]

Of course, the real problem was not the less powerful machine so much as it was the system's lack of structure. This was due to the fact that the overall system management functions that the mainframe world takes for granted were often misunderstood, overlooked, or just never identified. In other words, in these server-based systems, there were few if any reins placed upon the clients. There was nothing stopping any, or all, of the clients from simultaneously initiating 1,000,000-row queries against a database. You can imagine the results – the machines choked and died. Of course, everyone attributed the server's inability to handle the load to its most noticeable feature – its diminutive CPU. In other words, most people failed to realize that the real reason mainframes can handle so many clients is that mainframe systems don't let their clients run amok in the system.

Anyway, it seems that the industry as a whole concluded that, because of their lack of processing power, server-based systems could only handle a small number of users. What followed was an understandable, but incorrect, impression of how to create an Enterprise Caliber system.

PC-based systems are not the only ones that have fallen prey to this misconception; UNIX servers are also deployed in nearly the same fashion in most of the places where I have worked.

Take a look at the next image and see if it looks familiar:

[Image: several separate servers spread throughout an organization, each with its own pod of users and its own data store]

I bet your organization's infrastructure is set up something like this. Instead of having a single centralized resource, you probably have quite a few databases or servers spread throughout your organization. If you ask just about anyone why this is so, they will almost certainly answer that the smaller systems (UNIX and NT-based) can only manage X number of users.

They might also argue departmental necessity or security issues, but that is really a symptom of the underlying problem.

It is entirely possible to divide and secure data on a single server that can be used by multiple departments.

If you work for a large company, then you know the other discouraging facet of this design. This facet is perhaps the most expensive one – the myriad of "data bridges" most companies employ in a heroic but nearly futile attempt to make some sense of all the different pockets of information they have collected. Unfortunately, when most people think about distributed computing, they just take it for granted that the system must be designed as shown above – several separate lines of communication between different user groups and their associated data stores. This is so wrong! This design is nothing more than the illegitimate child of an understandable but incorrect conclusion the industry adopted some years ago.

Distributed architecture does not have to mean distributed data.

Enterprise Servers - The Correct Way

Let me make my major point clear. Although enterprise-level servers are not as powerful as mainframes, the reason that mainframes seem to be able to handle so many more clients is that, inherently, the mainframe system actively manages the clients' access to its available resources. At some point, this management is more important to the overall system performance than the amount of raw processing power. Think about your friends or acquaintances for a minute; I bet you know two people who earn about the same amount of money, but one of them seems to be more financially secure than the other. For both individuals the available resource, money, is limited and essentially the same. How do you explain the difference? Like every other engineering problem, it always boils down to one thing – the management (or better, the optimization) of a limited resource.

What this means is that while it doesn't hurt to have more powerful machines, what we really need to do at the enterprise level is to hire a smarter machine. We need a machine (read machine to include the operating system) that can monitor the demands on the system and respond accordingly. In other words, we do need to do what the mainframes do. But fortunately for us, we don't need to do it the same way. While the mainframe world has essentially one option – "I think I need a bigger box" – we have a nearly infinite number of options at our disposal.

If we do it right, we can always add processing power by exercising our option of adding another box. This simple concept is the essence of distributed architecture.

The real trick here is learning how to integrate the additional resources into the system in a manner that will enhance the system as a whole. This problem cannot be solved with hardware alone. It requires the same finely tuned orchestration of processes that the mainframe world applies to the problem. In real terms, this means that when an organization purchases or deploys hardware, there has to be more than the typical polite interchange that too often occurs between the hardware and development teams.

Developers really need to understand what hardware options are available and, just as importantly, the hardware team really needs to understand the nature of distributed architecture from the system architect's perspective. The good news is that even if your organization has made every mistake in the book, this problem is really more about the deployment and management of physical resources than a hardware-specific issue. In other words, as long as the hardware/network team has purchased basically sound equipment, it can always be re-organized into a more efficient configuration that can grow to meet the needs of any number of clients.

The Software Component System

A good part of this book is dedicated to designing, developing, and deploying Data objects. No, I don't mean business objects; I mean Data objects. Business objects are responsible for integrating an organization's skill and talent with the organization's data, while Data objects are concerned with managing an organization's data. There is a difference.

Data Objects

Think of an organization's data as raw material, maybe something like a pile of wood. If you give a carpenter a pile of wood, depending upon his skill and talent, he might build you a chair, a table, or maybe even a house. Organizations do essentially the same thing with information. If I give a top-notch sales organization a list of names and addresses, they will apply their organization's skill and talent to that list and build customers. If I give the same list to a top-notch Information Technology recruitment company, it would apply its skill and talent to the list and find products – new employees for its customers. In either case, the raw material is the same, a list of names and addresses.

What that means is that at some point, we can treat the data for both organizations exactly the same way.

Right about now, you are probably thinking that although the core of data is the same in both cases, we will actually need different information to satisfy each organization's needs. While the sales organization might be interested in information like purchasing power and customer tastes, the recruitment organization is probably more interested in things like years of experience and salary requirements.

It is possible to build a single Data object that can handle both organizations' data management needs, out of the box, without knowing anything at all about either company's skill and talent (their business rules). From the above requirements, it sounds like the organizations both need to manage information about people, so we would probably design a Data object that we might call a Person object.

The Property Bag

In order to make this Data object flexible enough to handle both organizations' data management tasks, we would divide the object's properties into two sets – a base set and an extended set. In this case, our task is to keep track of information about persons, so we might make the base set of properties something like ID, Last Name, First Name, Middle Name, Birth Date, etc.

Notice that I have chosen things that don't need to change whether the person is an engineer, a customer, or an employee, etc.

Then in addition to the base set of properties, we give each person object a set of extended properties. These extended properties are not pre-defined and can be configured as needed:

[Image: a Person object with a base set of properties and a configurable set of extended properties]

To keep the extended properties organized, we give each Person object an additional base property – something we will call a Property Bag. You can imagine that this Property Bag allows each organization to configure the Data object to carry whatever information it needs in order to apply its business rules most effectively. We'll be going into the details of this later in the book, but for now you might like to know that all of this can happen without changing a single table, stored procedure, or line of code anywhere in the system. That means that we can build 100% reusable Data objects.
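
To make the idea a little more concrete, here is a minimal sketch of such an object in VB. This is only an illustration of the concept, not the implementation we will build later in the book – the class name, the Collection-backed bag, and the method names are assumptions made for the sake of the example:

' clsPerson (class module) - an illustrative sketch only.
' The base properties are fixed; the extended properties live
' in a Collection keyed by property name - the Property Bag.
Option Explicit

Private mlngID As Long
Private mstrFirstName As String
Private mstrLastName As String
Private mcolPropertyBag As Collection    ' the Property Bag

Private Sub Class_Initialize()
    Set mcolPropertyBag = New Collection
End Sub

Public Property Get ID() As Long
    ID = mlngID
End Property

Public Property Let FirstName(ByVal strValue As String)
    mstrFirstName = strValue
End Property

Public Property Get FirstName() As String
    FirstName = mstrFirstName
End Property

Public Property Let LastName(ByVal strValue As String)
    mstrLastName = strValue
End Property

Public Property Get LastName() As String
    LastName = mstrLastName
End Property

' Extended properties: any name/value pair an organization needs.
Public Sub SetExtendedProperty(ByVal strName As String, ByVal varValue As Variant)
    On Error Resume Next
    mcolPropertyBag.Remove strName       ' replace the value if the key already exists
    On Error GoTo 0
    mcolPropertyBag.Add varValue, strName
End Sub

Public Function GetExtendedProperty(ByVal strName As String) As Variant
    GetExtendedProperty = mcolPropertyBag(strName)
End Function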

Using Data Objects

We can create one Person Data object that can be cast as any type of person. In other words, if we need to track employees, all we have to do to create an Employee object from a Person Data object is to define a new Property Bag for the Person object that contains employee-specific information. And if we need to track customers, then we can use exactly the same Person object (yes, and even the same base set of data) and add a Customer Property Bag that contains the information we want to track for customers. Notice that I didn't call either of these extended objects a business object. I called them Data objects. I guess you could argue that the selection of what information to place in the Property Bag is a business rule and that by defining the Property Bag we were executing a business rule. I would have to agree, but that just means that the Data object is flexible enough to be able to handle any set of business rules. Changing the set of properties contained in the Property Bag doesn't change a line of code in the object. The Data object is designed to simply manage data, so it doesn't need to contain a single business rule in its code.
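
In terms of the illustrative sketch from the previous section, that "casting" might look something like this (the extended property names here are hypothetical):

' An illustrative use of the clsPerson sketch - the same class
' "cast" two different ways by defining different Property Bags.
Private Sub DemoCasting()
    Dim objEmployee As clsPerson         ' "cast" as an Employee
    Set objEmployee = New clsPerson
    objEmployee.LastName = "Smith"
    objEmployee.SetExtendedProperty "HireDate", #1/4/1999#
    objEmployee.SetExtendedProperty "Salary", 42000

    Dim objCustomer As clsPerson         ' the same class "cast" as a Customer
    Set objCustomer = New clsPerson
    objCustomer.LastName = "Jones"
    objCustomer.SetExtendedProperty "PurchasingPower", "High"
    objCustomer.SetExtendedProperty "Tastes", "Contemporary"
End Sub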

This distinction is important because we cannot build 100% reusable business objects. Remember that business rules are akin to an organization's skill and talent. While two different carpenters might both build a chair from the same pile of wood, the chairs would be as different as the carpenters.

One chair might be crooked and worthless while the second chair becomes a priceless piece of art. It is important to note that in both cases, we can use exactly the same technique to plant the tree, cut the tree down, and mill the tree. We can even use the same truck to deliver the wood from the tree to the carpenters' woodpile.

In other words, a good part of the work that we do when we deliver an application is the same for every application we build. The act of storing or retrieving the data can be handled in the same manner no matter what application happens to be using that data. That means that Data objects are completely reusable. Just like the carpenters and the wood, two different applications can use the same raw materials – the Data objects – even though the application of the business rules will make the final results very much different. We can think of Data objects as raw material for applications, and this means that it is a worthwhile endeavor to build high-quality Data objects.

The following image is intended to depict the relationship between Data objects and business objects. As you can see, the Data object is an integral part of every business object:

[Image: a business object – business rules wrapped around a Data object]

A good part of this book contains step-by-step instructions for designing and building Enterprise Caliber Data Objects. These objects' sole responsibility is to manage data. They do it in a fashion that may seem foreign to your experience. For instance, every object has the ability to maintain a complete history about the data it manages. In other words, if an application used a Person object and a user changed the FirstName property of a particular Data object from one value to another, the object would make the change, record who did it, when they made the change, and keep track of the previous value. The object allows every user in the enterprise to view a list of previous values for any property. The object also gives the user the option of using the information in that list to undo any change that was made to an object. We call this history of changes to the object data over time an audit trail. The object's audit trail is also used for other things. Maybe instead of changing the FirstName property, the user accidentally deleted the entire object from the data store. The Data objects we will build in this book also have the capability to retrieve a list of currently deleted objects and restore them to their state at the time they were deleted. This may seem like a lot of functionality for a simple Data object, but remember that the Data object is the raw material from which we build our applications, so the better we make our Data objects the more powerful the applications we can build.
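
Continuing the illustrative clsPerson sketch from earlier, the heart of that audit trail might look something like the following – a revised FirstName property that captures its history before accepting a new value. The RecordChange routine and the way the record is reported are assumptions for illustration; the objects we build later hand this work to the data store through stored procedures:

' A revised FirstName property for the illustrative clsPerson sketch.
Public Property Let FirstName(ByVal strValue As String)
    If strValue <> mstrFirstName Then
        ' Capture the previous value before it is overwritten.
        RecordChange "FirstName", mstrFirstName, strValue
        mstrFirstName = strValue
    End If
End Property

Private Sub RecordChange(ByVal strProperty As String, _
                         ByVal varOldValue As Variant, _
                         ByVal varNewValue As Variant)
    ' Who, what, and when, plus the previous value - enough
    ' information to list the history of any property and to
    ' undo any change. Sketch only: a real Data object would
    ' persist this record rather than print it.
    Debug.Print strProperty & ": '" & varOldValue & "' -> '" & _
                varNewValue & "' by " & Environ$("USERNAME") & " at " & Now
End Sub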

In our initial example in this section I mentioned a list that contained both names and addresses. Then I proceeded to talk about the data as though it only contained the names of persons. I am sure many of you caught me on this one. I didn't forget or overlook the necessary connection between a particular person and the address or addresses that are related to that person. Rather I needed to give you a sense of the difference between Data objects and business objects before we tackled the concept of Connector objects.

Connector Objects

As the name implies, a Connector object is a special kind of Data object – one that is charged with the responsibility of managing the relationships between objects. This object is constructed with the same attention to detail as the other Data objects we talked about earlier. In other words, it has a complete audit trail and so on. That means that not only can we look at a relationship between two objects, but we can also look at the history of relationship(s) for an object. Yeah, that's right, this person used to be connected to this address, etc. Powerful stuff! Anyway, look at the new image for the business object with the addition of the Connector object. Notice that the business rules are still quite separate from any of the three Data objects:

[Image: a business object containing business rules and three Data objects – Person, Address, and Connector]

This means that we can use the business rules to control any or all of the three Data objects as required by the problem the business object is designed to solve. Now that you can see the relationship visually, let's take a closer look at the concept of Connector objects. We will use the carpenter example.

Most carpenters use more than wood as their raw material. They may use nails, screws, or glue, etc. Think about where the carpenter's skill and talent come into play when he is building a chair. Does his knowledge about what to do with nails come with the nails? No. It is part of the carpenter's skill and talent. Does his knowledge about what to do with nails come from the physical connection between the wood and nails he is using at this moment? No. It is part of the carpenter's skill and talent, developed from countless experiences with the connection of two types of raw materials. In other words, the carpenter draws upon his skill and talent (business rules) to select the correct pieces of wood and the correct connector – nail, screw, or glue – to join those pieces of wood together. There is a real difference between the actual physical connection that the carpenter creates – the nail, screw, or glue – and his understanding of how to create the connection – his skill and talent. This difference is very much like the difference between the Connector Data object and the business rules in a business object. It is possible to separate the knowledge required to insert a relationship into a database from the knowledge required to determine that the relationship is important. The data insertion is very much like a mechanical device – a nail, a screw, a Connector object – while the reasoning that goes into understanding that the relationship is important depends upon the skill and talent of the carpenter, or upon the business rules that an organization employs.

Let's reconsider our list of persons and addresses. Suppose that in one instance we needed a business object that tracked customers at their home address – maybe we are selling swimming pools. And in another instance we need a business object that tracked customers at their business address – maybe we are selling office furniture. In these cases, our different sets of business rules tell us which relationship – Customer-Home Address or Customer-Business Address – we need to use, but they don't tell us anything about the actual physical data connection. That is the same in both cases. The actual physical data connection is managed by a mechanical device and is not dependent upon the business rules. That means that we can solve both problems with three objects: a Person object – extended into a Customer with a Property Bag, an Address object, and a Connector object that we use to express the relationship between the Person and Address objects. We can use exactly the same three objects (and exactly the same data store) in either case. The business rules alone make one application different from the other.

Separation of content and function – this is the key.
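
To make the distinction concrete, here is a minimal Connector sketch in the same illustrative style as the earlier clsPerson sketch – again, the names are assumptions, not the implementation we build later. Notice that the object records which relationship exists, but it neither knows nor cares why:

' clsConnector (class module) - an illustrative sketch only.
Option Explicit

Private mlngPersonID As Long
Private mlngAddressID As Long
Private mstrRelationType As String   ' e.g. "Home" or "Business"

Public Sub Connect(ByVal lngPersonID As Long, _
                   ByVal lngAddressID As Long, _
                   ByVal strRelationType As String)
    mlngPersonID = lngPersonID
    mlngAddressID = lngAddressID
    mstrRelationType = strRelationType
    ' Sketch only: a real Connector object would persist the
    ' relationship - with a full audit trail - to the data store.
End Sub

The swimming pool application's business rules would call Connect with "Home"; the office furniture application's would call it with "Business". The mechanical device is identical in both cases.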

I also let one other thing slide as we talked earlier. A couple of times now I have shown you images with one or more Data objects and some business rules inside a container called a business object. That container, or business object, is actually the last major concept in this book.

Veneers

When I think about the way that business rules envelop the underlying Data objects, I always imagine that a business object (or component) is kind of like an M & M candy. You know – a chocolate or peanut center covered by a thin coating or veneer – a candy shell. This candy coating protects your hands from being covered in chocolate – the ultimate expression of encapsulation. Well, just like M & Ms use a candy veneer to encapsulate the peanut or chocolate center, we use a software veneer to encapsulate the data and business rules at the center of our business object:

[Image: a business object – a veneer encapsulating Data objects and business rules]

This gives us a way to work with several Data objects related by a set of business rules as one unit of information. As you might guess, it is possible to encapsulate several Data objects and some business rules into a business object. It is also possible to use a veneer to encapsulate several business objects into another veneer that we call an application:

[Image: an application – a veneer encapsulating several business objects]

This means that our concept of an application has changed. Now we can consider applications as a collection of business objects that are glued together with a set of business rules and maybe a couple of Connector objects. This gives us the ability to do something like construct a contact management application for the sales department and then turn around and use exactly the same application – including the data, as a component in our marketing or even our billing application. I know that probably sounds like a stretch, so let me break it down for you.

Consider what a contact management application is. It manages the names, addresses, telephone numbers, and maybe email addresses for a group of people. What information do you think the marketing department needs when it executes a nationwide direct mail campaign? What information do you think the billing department needs in order to send bills out to customers? If you were a salesman, do you think that your contact management application would be enhanced if you could include things like purchase history? I am going to leave you to fathom the possibilities.
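
In the illustrative style of the earlier sketches, a tiny contact management veneer might look something like this. The clsContactApp and clsAddress classes, the Street property, and the rest of the names here are hypothetical – the point is simply that the veneer bundles the Data objects behind a single point of contact and applies the business rules:

' clsContactApp (class module) - an illustrative veneer sketch.
Option Explicit

Private mobjPerson As clsPerson
Private mobjAddress As clsAddress
Private mobjConnector As clsConnector

Private Sub Class_Initialize()
    Set mobjPerson = New clsPerson
    Set mobjAddress = New clsAddress
    Set mobjConnector = New clsConnector
End Sub

' One business rule for this application: a customer is a
' Person tracked at a Home address.
Public Sub AddCustomer(ByVal strLastName As String, ByVal strStreet As String)
    mobjPerson.LastName = strLastName
    mobjAddress.Street = strStreet
    mobjConnector.Connect mobjPerson.ID, mobjAddress.ID, "Home"
End Sub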

Pulling the Pieces Together

Before we begin to look at how to configure an Enterprise Caliber system, let's take a minute or two and try to get a sense of how applications designed to run on a true distributed architecture differ from their more monolithic predecessors. There are three systems shown in the following images: a thin client system, a fat client system, and a multi-tier system:

[Image: three systems side by side – thin client, fat client, and multi-tier]

In the first two systems, the processing is shared between the server and the client machines to varying degrees. We use the terms thin client and fat client in an effort to quantify how much processing occurs at the client level. This is probably how most people, including hardware/integration specialists, view a standard client-server environment. In some very rudimentary sense, these are actually examples of distributed architecture. If you view the enterprise as either one of these systems, it seems perfectly reasonable to ask questions like: where is the application going to be installed – on the client or on the server? But true distributed architecture systems really look more like this third system. Notice how the same question about where to install the application doesn't really apply:

[Image: a true distributed architecture, with the application's processing spread across tiers of machines]

In a distributed architecture, an application isn't really installed on any one machine. It just exists throughout the entire system. This can be a disconcerting concept. Exactly how does an application exist throughout an entire system? Like any other engineering problem, it's easier to understand the problem if we apply the divide and conquer principle. Let's break the big problem down into more manageable pieces. For our purposes, we will identify those pieces as the different sets of processes that must occur to deliver an application to a client.

The Enterprise Caliber Physical System

I know this is starting to move away from the physical and into the logical realms of design, but remember that you can't solve this problem with hardware alone. What we need to do is to integrate both the hardware and software elements of our system into a single functional unit that is capable of delivering the highest level of service to the largest number of users. Anyway, if we overlook the system management functions we spoke about earlier, we can express just about any application in terms of the following four minor processes:

  • Data Storage Processes

  • Data Manipulation Processes

  • Data/Business Rule Integration Processes

  • Presentation Processes

I am not going to go into a discussion about the details of each process; we'll be getting into that shortly. I think that for our purposes here, the names are almost self-explanatory. The more important thing to notice from a physical perspective is the relative locations of the processes. The information must pass through all four processing centers (machines) before it can be displayed to the client or consigned to the data store.

At first blush, I am sure that this multi-tier/multi-machine notion might seem like a ridiculous thing. But, remember that we can't solve this problem with hardware alone. What we are doing here is identifying hardware modules that are available for use within our physical design. This modularization of the applications across the entire system allows us to exercise a higher degree of control over the system's performance at the hardware level. We need to go a little further before all of this will fall into place, but think about this for a minute. If an application is distributed across several machines in this fashion, then we can identify processing bottlenecks as a function of CPU utilization, Disk IO, etc.

In other words, we can learn exactly where to add the horsepower to the system to get the most bang for our buck. For example, suppose the techs learn through standard hardware analysis techniques that the heavy processing on the Data/Business Rule Integration machine is overtaxing that server's CPU. They can pass this information on to the development team to see if they can improve the process's design. If they can't improve the process, we still have the option of "throwing more hardware at the problem", but in this case, we don't simply throw hardware at an amorphous problem; we direct the hardware at the root cause of the problem. We can add a server or two (or 3 or 4 or 5) to the Data/Business Rule Integration processing tier of the system and address the processing shortfall directly.

When you read that last paragraph, I hope you noticed that our distributed architecture design can grow horizontally as well as vertically. Don't worry, I am going to explain that last sentence. Take a look at this next image:

[Image: a server farm with three servers on the Data/Business Rule Integration tier and one on each of the other tiers]

Notice that in the Data/Business Rule Integration Process server tier there are three servers rather than a single server as in the other processing tiers. This is an example of horizontal growth.

All it means is that we can add multiple servers to any of the processing tiers as needed.

This is a subtle point, but notice that we are no longer limited to adding machines to just the Data Storage processing tier. Instead, when we determine that there is a need for more processing power, we can apply that power with pinpoint precision to the root cause of the processing shortfall.

This capacity for horizontal growth can also help to make a single system capable of handling requests from virtually any number of clients. We will take a look at exactly how to do that shortly but first, we need to make sure that we handle the overall system management functions that control the users' access to the system's resources. To do that we will need a way to measure the clients' real-time draw upon the system's resources.

Take a look at the next image; pay special attention to the way that the client machines are connected to the two servers. Notice that in this image all of the clients are connected to a cloud rather than to any one server:

[Image: all of the clients connected through a cloud to Server A and a smaller Server B]

The next thing to notice is that Server A is depicted as larger than Server B. This is not intended to illustrate the relative power of each server, but rather to give us a sense of the relative availability of each server. In other words, consider that both servers are essentially equal except that Server B has the added responsibility of monitoring the available resources and dispatching the client to the least busy resource – either to Server A or to itself. This added responsibility means that Server B will be less available to the clients - that is why it is shown as being smaller.

If we combine the two concepts we just covered, we have just about all of the tools we need to construct a distributed physical system that is capable of meeting the needs of any number of users while also being flexible enough to be able to grow to meet any future needs as well.

For those of you who are still here, I am going to make some broad statements concerning the different characteristics of the different types of processing. Actually, what I am going to do is to give you some rules of thumb concerning the relative amount of processing time each minor process requires. You can always use these general rules to design a good first cut at a physical enterprise installation.

  • Data Storage processes execute relatively quickly because they only have to deal with reading and writing to the data store.

  • Data Manipulation processes take longer to execute than Data Storage processes because they inherently depend upon the Data Storage processes completing before they themselves are complete.

  • Data/Business Rule Integration processes take longer again because they depend upon the first two processes finishing before they can complete their own execution.

  • Presentation processes take the longest to finish because they depend upon the other three processes finishing before they can complete their own execution.

Think about the four broad statements above. Then take a look at the next image. Notice that the number of servers on each tier mirrors the processing time requirements outlined in the four statements. We said that Presentation processes took the longest time to complete, so we added more servers to this tier. Data/Business Rule Integration processes took the second longest time to complete, so we have added fewer servers to this tier, and so on.

[Image: a server farm in which the number of servers on each tier mirrors that tier's processing time requirements]

Think about the difference between this design and the one we looked at earlier with three separate database servers, each serving its own pod of users. In that design, we incorrectly identified the processing bottleneck as occurring on the database server. So, to remedy that problem, we added more database servers to the enterprise. The end result was that we had many pockets of data spread throughout the organization that had to be bridged to give us an overall picture of our data. The best way I can think of to describe that kind of design is to say that it represents a distributed data system. There are multiple pockets of data, each serving different pods of users. Distributed architecture does not have to mean distributed data!

The goal of distributed architecture is to distribute the processing load across as many resources as necessary NOT to distribute the data within the system.

The Enterprise Caliber Component System

Ok, now that you have a sense of how the individual pieces that make up an object-oriented application fit together, it is time to take another look at the objects from a different perspective. Remember our overall goal: we are designing an enterprise, and for us that means that we need to be able to deploy our applications, or our objects, across a distributed architecture. As we saw in the physical section, distributed architecture requires more than just adding a couple of machines to a system. In order to get any benefit from a distributed physical architecture, we must build Data objects that are designed to take advantage of that architecture. Generally, this means that we must design each object as several distinct sets of processes. We've already defined these processes but we have yet to discuss them in any detail:

  • Data Storage Processes – These processes are responsible for handling the physical reads and writes to the data store. To keep things simple, we will consider that these processes are under the control of an ODBC-compliant Database Management System like SQL Server or Oracle. That means that for our purposes we can consider the data storage processes to be a collection of tables, stored procedures, and views in a relational database.

  • Data Manipulation Processes – These processes remind me of a card catalog in a library. We use them to find out exactly where the data is, or should be, located. That means for our purposes, these cards would contain the names and parameter requirements for the stored procedures in our relational database and have the ability to execute those stored procedures (there is a small sketch of this after the list).

  • Data/Business Rule Integration Processes – These processes are used to add value to an organization's data. Remember the carpenter example. These processes are responsible for combining an organization's data with the organization's talent. That means for our purposes, these processes might perform tasks like special calculations or manage important relationships.

  • Presentation Processes – These processes are responsible for delivering information to the user and retrieving information from the user. This is really just the typical GUI or reporting system for an application. In the physical system, these processes might exist in many different places. However, I believe that in most newly developed applications, these processes should probably be executed at a web server like IIS for instance.
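
To give you a taste of what a Data Manipulation process might look like in VB, here is a minimal sketch that "knows" the name and parameter requirements of one stored procedure and executes it through ADO. The DSN name and the spPersonGet procedure are assumptions made for illustration:

' An illustrative Data Manipulation routine (requires a project
' reference to Microsoft ActiveX Data Objects).
Public Function FetchPerson(ByVal lngID As Long) As ADODB.Recordset
    Dim cn As ADODB.Connection
    Dim cmd As ADODB.Command

    Set cn = New ADODB.Connection
    cn.Open "DSN=Enterprise"                        ' hypothetical DSN

    Set cmd = New ADODB.Command
    Set cmd.ActiveConnection = cn
    cmd.CommandType = adCmdStoredProc
    cmd.CommandText = "spPersonGet"                 ' hypothetical procedure
    cmd.Parameters.Append cmd.CreateParameter("@ID", adInteger, adParamInput, , lngID)

    ' Execute the stored procedure and hand the results back.
    Set FetchPerson = cmd.Execute
End Function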

This means that each one of the objects we described earlier, like the Person object or the Address object, really consists of four separate sections of code, or DLLs. It is a common practice in the industry to call each of these sections of code a tier. Anyway, just as we physically segregated the processes onto separate servers in a physical treatment, we must also segregate the different sections of code onto different tiers in a logical treatment:

[Image: an object's code divided across the Data, Data Centric, User Centric, and Presentation tiers]

This is actually what makes the physical treatment possible. It wouldn't matter if we had 1000 machines if the code were a single block that could only run on a single machine.

The sections of code are split out as follows:

  • The sections of code that handle the Data Storage processes are found in the Data tier.

  • The sections of code that handle the Data Manipulation processes are found in the Data Centric tier.

  • The sections of code that handle the Data/Business Rule Integration processes are found in the User Centric tier.

  • The sections of code that handle the Presentation processes are found in the Presentation tier.

Using both hardware and software together, we have isolated each of the processes that an application requires onto a different machine or bank of machines. We can use this isolation to add processing power – either physically, by adding more servers, or virtually, by adding more instances of a process – in exactly the place it will do the most good. For example, if we found that we were having a processing bottleneck on the User Centric tier and that the servers in that bank still had plenty of CPU available, then we could just start another instance of our Person object's user-centric processes on that machine.

This ability to pinpoint the source of processing (IO etc.) bottlenecks and cure them with the same precision is the hallmark of good enterprise design.

Book Overview

This book is divided into three sections. If you are a programmer, then I would strongly recommend starting at the beginning and working your way through each of the chapters of the book in turn. Please don't skip the first section. Even though the first section presents materials that have to do with things like hardware and management techniques, this entire book is really all about programming. As you work through the sections on fault tolerance, parallel processing, and security you will learn the basic foundational concepts that are deeply ingrained in every facet of world-class enterprise design. In other words, in order to design programs that take full advantage of distributed architecture you really need to understand, at least logically, how a server farm works.

If you are not a programmer, then I would still suggest reading through each of the chapters of the book in turn. Although the information in the first and last sections of the book is technical, most of those sections are written in plain English – and you can comfortably skip through any sections of code that are in those sections without missing a beat. When you get to the middle section of the book, I would recommend that you just read through the first 10 or 20 pages of each chapter – don't worry, if you are not a programmer, you will know where to stop. In this section, I cover the major concept and the major functionality at the beginning of each chapter. This information is really all you need to know to be able to spec-out applications using this technique, but don't be fooled. You do need to know it. If you're a manager or team leader, let me say that, "While it is true that you don't have to know exactly how electricity works to use the light switch, you do have to know where the light switch is and how to turn it on in order to see in the dark". The first 10 or 20 pages of each chapter in this section will show you where the light switch is and tell you exactly what you should expect will happen when you turn it on or off.

Section I - The Infrastructure

The first section of this book is designed to give you a solid base of information that you can draw upon when you address the issues that will arise when you are installing an NT based Enterprise Caliber system for your organization. By the time you finish this section of the book, you will have a good sense of the infrastructure (both hardware and operating system) options that are available to you.

In the next chapter, we will take a closer look at the information we need to consider when we translate an organization's information processing requirements into the actual infrastructure. In order to do that we will consider the following three topics:

  • First, we'll develop an understanding about the concept of fault tolerance.

  • Second, we'll consider design techniques that we can use to create a system that is capable of true parallel processing.

  • Third, we'll take a look at some of the things we need to consider when we attack the issue of enterprise security. In this book, I use a particular technique that I call inherent system security. This technique uses hardware, operating system, and application level controls to ensure that the system's users are guarded from any of the harms that typically befall them.

In the last chapter in this section, we will take a look at some real concrete measures we can take to construct a dynamic physical system that is capable of growing with our organization. To do this, we will cover four major topics:

  • First, we will cover the conceptual/physical design of server farms without hardware level clustering.

  • Second, we will develop an understanding of hardware level clustering using the current version of Microsoft Cluster Server.

  • Third, we will cover the design of server farms that can take advantage of hardware level clustering.

  • Finally, we will look at techniques we can use to combine hardware clustering with MSCS and Windows Load Balancing Services (WLBS) to provide the most flexible solution with a much lower total cost of ownership.

Section II - Enterprise Caliber Data Objects

In this section of the book, we are going to learn how to construct Data objects that are truly worthy of being deployed across a world-class enterprise installation. I have employed a tried and true engineering paradigm in this section of the book. In other words, we are going to learn to build the best black boxes anyone has ever seen. This also means that, despite some of the experts' opinions to the contrary, we will learn to build completely reusable Data objects. Count on it.

Of course, it almost goes without saying that these objects will have the ability to manage information in the data store – they can insert, update, delete, and retrieve it. But these objects, designed to complement the Windows NT security model, also keep a complete audit history of every action that any user has ever taken against each object. This audit history is one part of the overall system security model, but it also offers some additional benefits. It will allow us to provide any user with the ability to perform unlimited undos.

You know, when you make a mistake typing a line or two of data, you press Ctrl + Z or you select Undo from the Edit menu and the application takes you back step-by-step through time until your data is restored to its previous, correct, state.

This functionality is inherent in every Data object. And, using the same audit history, we will also offer the user the ability to restore an object from a deleted state. Of course, we are professionals, so when we design a user interface we will ensure that any user can always perform these actions with, at most, a couple of intuitive mouse clicks. Really! I am not kidding.

The major thrust of this book is Enterprise Architecture, so we will do our part in this section by learning to design and build Data objects that can take full advantage of a distributed architecture. In practice, that means that we will build each object as two or more dynamic link libraries that can be installed on different machines. This is one of the key things we must understand to take full advantage of the parallel processing capabilities we have designed into our server farm.

I don't want to be a nag, but even if you hate hardware, don't forget to read through the part on server farm design. It is basic information for programmers who design code for true parallel processing systems. Yes! That is exactly what you are going to do.

Section III - Building Enterprise Caliber Applications

In this section of the book, we will begin the process of pulling together the pieces that we examined in the two previous sections. We still need to cover a couple of VB coding techniques, but we will shift our major focus here from using VB to create Data objects to using VB to create business objects and applications. In a distributed architecture, an application doesn't have to be an executable that sits on the client's desktop. It can be a lot of different things including a dynamic link library (DLL) that can exist anywhere within the enterprise. This application, or veneer DLL, contains all of the business rules that an organization applies to its data for a particular application. The veneer is designed to both apply those rules and bring together all of the different Data objects an application needs into a single point of contact that can be used for programming any type of user interface.

Once we have learned how to use Data objects, business objects, and veneers to build applications, we will take another look at some of the rewards that our separation of data, business rules, and user interface in the previous sections has brought us. We will look at some of the different options available for developing user interfaces. In this section of the book, I will focus on a particular method for application deployment – Active Server Pages.

Active Server Pages (ASP) allow us to deliver applications, literally, to any place in the world without ever touching the individual desktop machines. This means that we can develop a user interface for an application today, install it on our server farm today, and virtually any number of users can begin using that application today. That is precisely the kind of efficiency you need to deliver your enterprise's services to your organization. Of course, the term ASP is a very generic term. You can be sure that we will approach ASP with the same dedication to quality that we applied to our data objects. In this section, we will learn to combine ASP, veneers, VB Script, and DHTML to deliver world-class interfaces to the extents of our enterprise.
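
Just to give you a taste of where we are headed, a page built this way can be as simple as the following sketch. The ContactApp.Person ProgID and its Load method are hypothetical stand-ins for a veneer DLL registered on the web server:

<%@ Language=VBScript %>
<%
  ' An illustrative sketch: the interface runs on the server farm,
  ' not on the desktop.
  Dim objPerson
  Set objPerson = Server.CreateObject("ContactApp.Person")
  objPerson.Load Request.QueryString("ID")   ' hypothetical Load method
%>
<html>
  <body>
    <p>Name: <%= objPerson.FirstName & " " & objPerson.LastName %></p>
  </body>
</html>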

Summary

Phew! For a first chapter we've already covered a lot of ground. Don't worry, we'll be coming back to many of the concepts I've introduced here throughout the rest of the book, but I wanted to give you something of an idea of what's on the trail ahead.

Trying to walk in the dark is difficult enough, but trying to do it with a wrong image of the route is harder still, and that's what this first chapter has really been all about. Not only did I want to give you some brief background to some of the key concepts that we'll be developing over the course of the following chapters but, perhaps more importantly, I also wanted to broaden your horizons so that you can go into the rest of the book realizing that many of the design principles we will be using are indeed viable. There are many misconceptions regarding enterprise architecture, and I wanted to straighten these out before we began the real work.

In the next few chapters, we'll be looking at an Enterprise Caliber infrastructure before moving on to look at some of the code behind the Enterprise Caliber Data Objects.

