Integrating Applications with Message Queuing Middleware
Archived content. No warranty is made as to technical accuracy. Content may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
For a variety of reasons, many corporations would like to achieve higher levels of integration between their various line-of-business applications. "Integration" also includes adding Web-based front-ends to existing "back-end" applications. Yet, conventional forms of communication, including remote data access and RPCs, are limited in this role. This article discusses the challenges of application integration and describes the benefits offered by message queuing middleware (MQM) products such as Microsoft® Message Queue Server (MSMQ).
'Network computing' is a very ambiguous term these days. Some would say that a 3270-based application is network-based because it uses a network to connect the terminal to the mainframe. Or, that a 2-tiered client/server application is network-based because data flows between the desktop application and the RDBMS over a network. An increasingly relevant definition of the term applies to virtual applications that are assembled from many different components that run on many machines across a network as if they were a single system. Components can range from entire centralized applications to single modules of a larger distributed application. Most important, it is becoming clear that only virtual applications can deliver the flexibility to meet many of the leading-edge needs of today's corporate information systems.
A Powerful Status Quo: Centralized Computing
Based on the virtual application definition of network computing, the vast majority of online business-critical applications today are not network-ready. They are built using centralized approaches where the business rules and data that comprise the application reside on a single mainframe or network server, and they lack ways to participate as components on the network.
An important consequence of centralized applications is the emergence of application stovepipes that are owned and operated by domains within a company. Stovepipe applications usually do a great job of servicing the needs of their domains (e.g., Marketing, Accounting or Shipping), but typically they evolve in isolation from other domains. Domain owners – such as the V.P. of Sales – care mostly about the smooth operation of their part of the business. Consequently, domains expend very little effort on building applications that can interoperate with applications in other domains.
Unfortunately, avoiding or deferring virtual application development carries an ever-increasing opportunity cost for the business as a whole. The excuse that 'virtual applications are too hard to build and deploy' is becoming less and less acceptable to management, even when one considers the challenges.
The Challenges of Integrating Stovepipe Applications
Most companies have already tried to integrate at least some of their stovepipe applications. For example, a common goal is to provide a common view of the customer, where the customer's identity frequently spans several ownership domains and stovepipe applications. In spite of the desire, most efforts have not been totally successful because:
Integrating applications from different domains almost always means integrating applications built at different times by different people using different tools and different database structures. Resolving so many differences presents huge architectural and technical challenges.
Communication technologies that are good for building centralized applications (for example, OSF DCE, CORBA IIOP, SNA/LU6.2, SQL*Net or ODBC) work less well as the glue for virtual applications. These examples fall into the category of tightly coupled communications technology because they require senders and receivers to establish simultaneous online connections and to share protocol, parameter, and data formats. Given the nature of stovepipe applications, simultaneously satisfying all these requirements is rarely achieved.
Many legacy applications were built using technology that was never designed for integration of any sort.
Most packaged applications acquired from Independent Software Vendors (ISVs) traditionally have not been designed with application integration in mind.
Interestingly, none of these technically oriented reasons presents an insurmountable problem. Yet, even if they are overcome, many attempts to build virtual applications will still fail for a distinctly human reason. Owners of one application domain are rarely sympathetic to the needs of owners of other domains. They resist changes to their own applications, offering justifications that include concern for stability, availability, and performance.
Perhaps even more important is the appreciation that there are less sophisticated (and less risky) off-line ways to get application stovepipes to work with each other. In particular, file transfers combined with batch updates work well enough for many integration needs. The bottom line is that the compelling business reasons to endure the cost, complexity and risk of integrating stovepipe applications – and suitable technology solutions – until recently have been missing.
Profound Change is Coming
Many profound changes are forcing organizations to rethink and broaden their views on virtual application computing:
Customer care initiatives are forcing companies to present a single, near-real time view of the customer and his or her relationship to the company, even though information about the customer exists across multiple stovepipe applications and domains of ownership.
The once per night frequency of batch/file-transfer approaches is no longer timely enough for many applications. This is particularly apparent in supply chain management where competitive advantages are increasingly coming from near-real time data collection and propagation.
A new style of business event-based application is emerging where activities in one domain – such as a debit to inventory – must cause some number of other applications in other domains (from replenishment applications to modeling spreadsheets) to perform a related action.
Mobile computing is quickly becoming a way of life. Unfortunately, its fundamental properties are incompatible with centralized architectures and tightly coupled communication techniques.
In fact, the ability to deliver reliable network-based applications has become a significant competitive differentiator in many industries and an operating requirement in others. Just to maintain parity in their industries, businesses need to move beyond their comfortably familiar centralized applications.
To build successful virtual applications, companies must address two sets of requirements. First, application interfaces must be written in a way that hides the internal workings of the component. Since the internals of applications are bound to change, any approach that requires components to understand each others' internal workings in order to remain synchronized will ultimately break down. Second, the technology used to communicate between components over the network must satisfy three technical requirements:
Simultaneous connections must not be required between components in order to communicate (networks are frequently unavailable, and senders and receivers may run at different times).
There must be extremely strong guarantees that data sent between components will not be lost, reordered, or duplicated (if communication must work over unreliable connections and between components that run at different times, it must be completely trustworthy).
It must be possible to translate the data as it flows between components (not all components will interpret data in the same way).

Figure 2: Application Integration with MQM
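The first two requirements can be illustrated with a minimal sketch of a store-and-forward queue. This is plain Python, not any vendor's API; the class and method names are purely illustrative. The sender returns immediately regardless of whether a receiver is connected, and messages are delivered in order, without loss or duplication.

```python
import collections

class DurableQueue:
    """Illustrative store-and-forward queue: holds messages until the
    receiver asks for them, preserves order, and drops duplicates."""
    def __init__(self):
        self._pending = collections.deque()
        self._seen_ids = set()

    def send(self, msg_id, body):
        # The sender returns immediately; no receiver need be online.
        if msg_id in self._seen_ids:      # a retried send is ignored
            return
        self._seen_ids.add(msg_id)
        self._pending.append((msg_id, body))

    def receive_all(self):
        # The receiver drains whatever accumulated while it was offline.
        delivered = list(self._pending)
        self._pending.clear()
        return delivered

q = DurableQueue()
q.send(1, "debit inventory")
q.send(2, "update forecast")
q.send(1, "debit inventory")   # retry of message 1: not duplicated
print(q.receive_all())         # [(1, 'debit inventory'), (2, 'update forecast')]
```

A real MQM product adds what this toy omits: persistence across machine failures, routing between machines, and transactional delivery.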
Surprisingly, there is also a vitally important business requirement as well. The business model behind any technological solution must promote adoption by the Independent Software Vendors (ISVs) that build the majority of packaged applications. Without ISV adoption, companies will be unable to integrate purchased applications into their network-computing infrastructure.
An Emerging Solution: Message Queuing Middleware
One technology stands out as a solution for building virtual applications: Message Queuing Middleware (MQM). Using MQM, components communicate with each other as a series of messages. Messages can contain any data that is understood by both sender and receiver, such as a request for information or a response. While in transit between components, MQM services keep messages in holding areas called queues – hence the name message queuing middleware. Queues protect messages from being lost in transit and provide a place for receivers to look for messages when they are ready. Communicating between components using MQM services offers a number of benefits:
Components can use MQM services to send messages and then continue processing regardless of whether the receiving component is running or reachable over the network. The receiver may be unreachable because of a network problem, or be naturally disconnected, as in the case of mobile users who only connect periodically to the network.
When networks become available or receiving components are ready to process requests, MQM software will deliver any waiting messages.
MQM software uses powerful techniques to make sure that messages are not lost in transit, delivered out of order, or delivered more than once. In other words, MQM offers the level of reliability required by business-critical applications.
MQM can also route messages efficiently around failed machines and network bottlenecks. Administrators can configure redundant communications paths to ensure availability.
Perhaps most important, communicating via messages does not require that components be aware of each others' implementation details. Developers can use MQM services, along with protocol and message translators, to bridge between dissimilar component architectures. As long as the sending component can produce a message using one MQM Provider and the receiving component can accept messages from another MQM Provider, it is a straightforward process – for the first time – to convert between network protocols and message formats.
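As a sketch of bridging dissimilar providers, suppose one hypothetical MQM product envelopes messages as Python dicts while another expects a JSON string with different header names. The envelope fields here are invented for illustration; they do not correspond to any actual product's format.

```python
import json

def bridge(msg_a):
    """Illustrative bridge between two hypothetical MQM providers:
    provider A uses a dict envelope, provider B expects a JSON
    string with different header names. Field names are invented."""
    return json.dumps({
        "dest_queue": msg_a["to"],          # A's 'to' becomes B's 'dest_queue'
        "payload": msg_a["body"],
        "correlation": msg_a.get("id"),     # preserved so replies can be matched
    })

print(bridge({"to": "orders", "body": "ship SKU A-100", "id": 7}))
```

Because each side only ever talks to its own MQM provider, neither application needs to know the bridge exists.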
MQM and Stovepipe Integration
Within most corporations, the most important use for MQM is to lay the groundwork for stovepipe integration. Organizations with a need to integrate stovepipe applications should consider MQM-based strategies for at least the following reasons:
Once components adopt an MQM-based infrastructure, they become able to accept messages from other components. This dramatically simplifies stovepipe integration.
Adding MQM-based interfaces to components that were built without network-based interfaces is usually easier than adding interfaces built on conventional communication technologies.
Translating messages from one MQM format into another is a straightforward process (translation is needed when sending and receiving applications are built using different MQM services).
MQM facilitates transformation of messages from one format to another. Transformation enables domain owners to change their components without affecting components in other application domains – avoiding the ripple effect that has constantly plagued network-based applications.
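A transformation shim of the kind described above might look like the following sketch. The field names and the version change are hypothetical: imagine a domain renamed a field and added a new one, and the shim lets unmodified senders keep emitting the old format.

```python
def transform_v1_to_v2(msg):
    """Illustrative transform: the receiving domain renamed 'qty' to
    'quantity' and added a 'unit' field. The shim shields legacy
    senders from the change. All names here are hypothetical."""
    return {
        "sku": msg["sku"],
        "quantity": msg["qty"],
        "unit": "each",          # assumed default for legacy senders
    }

print(transform_v1_to_v2({"sku": "A-100", "qty": 5}))
```

Placing such transforms in the messaging layer, rather than in the applications, is what prevents a change in one domain from rippling into every other domain.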
MQM also represents an ideal way to add web-based interfaces to applications. For example, web server-based scripts can accept a user's input over HTTP, make requests to components via messages, and return control immediately to the browser. The actual message processing, which is often time-consuming, can occur later.
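This enqueue-and-return pattern can be sketched with Python's in-process `queue` module standing in for an MQM queue (a real deployment would use a durable, networked queue, and the handler and worker names are invented for illustration):

```python
import queue
import threading

work = queue.Queue()   # stands in for a durable MQM queue

def handle_request(order):
    """Web script: enqueue the request and reply to the browser at once."""
    work.put(order)
    return "202 Accepted: your order is being processed"

def worker():
    """Back-end component drains the queue on its own schedule."""
    while True:
        order = work.get()
        # ...time-consuming processing happens here, long after the
        # browser has already received its response...
        work.task_done()

threading.Thread(target=worker, daemon=True).start()
print(handle_request({"item": "widget", "qty": 3}))
work.join()            # in this demo, wait for the worker to finish
```

The browser sees a fast acknowledgment; the slow work proceeds independently, and the queue absorbs bursts of requests that the back end cannot process immediately.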
Is MQM New?
The simple answer is 'no'. MQM has been in use in certain industries, such as telecommunications, airlines and financial services, for years. The downside was that most companies wrote their own message queuing middleware because of a lack of affordable, industrial-strength products. In addition, the sheer cost to develop all the needed critical features – such as guaranteed delivery, message routing and once-only delivery – was prohibitive. Most custom-built solutions ended up being somewhat primitive or highly specialized and expensive to maintain.
IBM's delivery of MQSeries in 1994 began changing the message queuing equation. For the first time, application developers in any industry could build network-computing applications using powerful (yet expensive and complex) off-the-shelf message queuing technology. More recently, Microsoft has identified the opportunity for MQM in volume and ISV markets, and is providing Microsoft® Message Queue Server (MSMQ) as a service of Windows NT® Server and Windows NT Server Enterprise Edition at no additional charge. And, via products such as the MSMQ-MQSeries Bridge, companies can use MSMQ and MQSeries together for a best-of-both-worlds solution.
Call to Action
The bottom line is that until there is a movement towards MQM-enabled applications, the goal of virtual applications will remain beyond reach. For companies that want to build integrated applications now, there are several steps to take:
Demand that application developers add MQM-enabled interfaces to stovepipe applications to facilitate integration.
Require new application development to exploit MQM as a fundamental infrastructure (which, in turn, provides MQM-enabled interfaces).
Acquire ISV application products that have some form of MQM interface (make the assumption that no application should exist in isolation forever).
Unless end user companies take action and advocate building and buying MQM-enabled applications, inertia almost guarantees that little will change.
For More Information
For the latest information on Windows NT Server, check out our World Wide Web site at http://www.microsoft.com/ntserver or the Windows NT Server Forum on MSN™, The Microsoft Network (GO WORD: MSNTS).