HPC and UFOs Explained!
I had several discussions about High Performance Computing over the past week or so with people in different job roles.
HPC is for building UFOs
Now that Iʼve got your attention, let me explain: I would like to take one field of application - namely aeronautics - and walk you through the design process that engineers in that field follow. Iʼll then try and explain where Microsoft technologies fit in that process. I am no expert in aeronautics - itʼs just a personal interest of mine - so if you find any mistakes, or just think I have smoked one too many, feel free to comment!
Aircraft designers typically use a variety of CAD / CAE programs to study the aircraft geometry on their workstations. A good example is Dassault Systèmesʼ CATIA.
The geometry thus generated is then discretized for finite-element analysis. This step is often called pre-processing or mesh generation. Material properties and boundary conditions (loads, constraints) are associated with the mesh elements. A variety of mathematical models are produced to study structural stress, pressure distribution, heat distribution, lift and drag, and so on.
Software like Ansys is often used for mesh generation. Several engineering outfits write their own mesh generators as well. These packages may take advantage of parallel computing.
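To make the pre-processing step a bit more concrete, here is a toy sketch (in Python, purely for illustration - real meshers like Ansys or in-house generators are vastly more sophisticated) of discretizing a rectangular plate into a structured triangular mesh and tagging the boundary nodes where loads and constraints would later be attached:

```python
# Toy pre-processing sketch: discretize a rectangular plate into a
# structured grid of triangles and tag the boundary nodes where
# constraints or loads would be associated. Illustrative only.

def make_mesh(width, height, nx, ny):
    """Return (nodes, triangles, boundary) for an nx-by-ny grid."""
    nodes = []                      # (x, y) coordinates
    for j in range(ny + 1):
        for i in range(nx + 1):
            nodes.append((width * i / nx, height * j / ny))

    triangles = []                  # each triangle is 3 node indices
    for j in range(ny):
        for i in range(nx):
            n0 = j * (nx + 1) + i           # lower-left corner of the cell
            n1, n2, n3 = n0 + 1, n0 + nx + 1, n0 + nx + 2
            triangles.append((n0, n1, n3))  # split each rectangular cell
            triangles.append((n0, n3, n2))  # into two triangles

    # Boundary nodes: where boundary conditions get attached.
    boundary = [k for k, (x, y) in enumerate(nodes)
                if x in (0.0, width) or y in (0.0, height)]
    return nodes, triangles, boundary

nodes, tris, bnd = make_mesh(2.0, 1.0, 4, 2)
print(len(nodes), len(tris), len(bnd))  # 15 nodes, 16 triangles, 12 boundary
```

The output of this stage - node coordinates, element connectivity, boundary sets - is essentially what gets handed to the solver, whatever tool produced it.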
The models thus created are then run through separate packages (or modules within the same commercial package) called solvers or processors. These applications benefit enormously from parallel computing and are often written for high-performance computing clusters. To put it simply, the meshes are partitioned amongst the computing nodes. Each node solves the equations on the elements that pertain to it, then communicates with the neighboring nodes over low-latency links to pass boundary results. Once every node has finished, we have an overall picture of the approximated solution. Several solvers are available in the public domain; again, part of the “secret sauce” of engineering companies is in their own solvers - their algorithms, speed and precision.
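The partition-and-communicate idea fits in a few lines. The toy below (a sketch, not a production solver) computes the 1D steady heat equation by Jacobi iteration on two simulated “compute nodes”, each owning half the grid plus a ghost cell; the ghost-cell exchange stands in for the boundary results that real solvers pass over low-latency interconnects:

```python
# Domain-decomposition sketch: u'' = 0 on 21 grid points with fixed
# ends u[0] = 0 and u[20] = 100, solved by Jacobi iteration split
# between two simulated "compute nodes".

def jacobi_two_nodes(iters=5000):
    # Node A owns interior points 1..9, node B owns 10..19;
    # each keeps one ghost cell for its neighbour's edge value.
    a = [0.0] * 11          # global points 0..10; a[10] is a ghost cell
    b = [0.0] * 12          # global points 9..20; b[0] is a ghost cell
    b[11] = 100.0           # physical boundary condition

    for _ in range(iters):
        # Halo exchange -- the only inter-node communication needed.
        a[10], b[0] = b[1], a[9]
        # Independent local sweeps (these would run in parallel on a cluster).
        a = [a[0]] + [0.5 * (a[k-1] + a[k+1]) for k in range(1, 10)] + [a[10]]
        b = [b[0]] + [0.5 * (b[k-1] + b[k+1]) for k in range(1, 11)] + [b[11]]
    return a, b

a, b = jacobi_two_nodes()
print(round(b[1], 2))  # global midpoint -> 50.0 (the linear profile)
```

Because each sweep touches only a node's own points, the work scales out across the cluster; only the thin halo at each partition edge has to cross the wire, which is why low-latency interconnects matter so much.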
Figure 4: Pressure distribution over a BWB glider
The visualization step is as important as the computation itself, because it is on the visualized data that design decisions are made. The visualization process itself may benefit from parallel processing, depending largely on the quantity of data at hand.
What has Microsoft got to do with it?
Our platform, though, can do much more than that.
First of all, the data-handling requirements of such a process are huge. Not only are the file sizes involved typically on the order of gigabytes, but those files also do not mean much unless the design and simulation parameters used to produce them are kept associated with them.
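As an illustration of that last point (a minimal sketch, not any particular product's mechanism), one simple way to keep simulation parameters attached to a result file is a metadata “sidecar” written alongside it; the field names below are made up:

```python
# Sketch: write a JSON "sidecar" next to each result file so the
# parameters that produced it travel with it. Field names are
# hypothetical, purely for illustration.
import hashlib
import json
import pathlib

def save_with_metadata(result_path, data: bytes, params: dict):
    path = pathlib.Path(result_path)
    path.write_bytes(data)
    meta = {
        "parameters": params,                        # mesh size, solver, loads...
        "sha256": hashlib.sha256(data).hexdigest(),  # ties metadata to this exact file
    }
    sidecar = path.with_suffix(path.suffix + ".meta.json")
    sidecar.write_text(json.dumps(meta, indent=2))

save_with_metadata("run42.dat", b"...binary results...",
                   {"solver": "pressure", "mesh_elements": 16, "aoa_deg": 4.0})
meta = json.loads(pathlib.Path("run42.dat.meta.json").read_text())
print(meta["parameters"]["solver"])  # pressure
```

In practice this is exactly the kind of bookkeeping you want a platform - databases, SharePoint, workflow services - to do for you rather than relying on file-naming discipline.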
Figure 6: Structural, Aerodynamic and Aeroelastic Design Concept Model
Microsoft's Windows Workflow Foundation (WF) offers the building blocks for the application logic that automates such complex workflows. It also exposes several services, like persistence and tracking, that are extremely useful for managing the complexity of such processes and make interacting with them much easier. With Visual Studio one can construct such application-logic workflows and then host them as services on Windows Server. The communication amongst those services can be handled by Windows Communication Foundation (WCF).
The compute cluster itself can also be conceived as a “computing” service exposed to the application logic via a WCF interface. In fact, thatʼs what weʼre building into HPC Server 2008.
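WF and WCF are .NET technologies, but the persistence idea is easy to sketch in any language. The toy Python workflow below runs a pipeline of (hypothetical) engineering steps and checkpoints after each one, so an interrupted run resumes where it left off - the kind of service WF's persistence provides for long-running workflows:

```python
# Sketch of workflow persistence: run a pipeline of steps and
# checkpoint after each, so a crashed or paused workflow resumes
# where it left off. Step names are hypothetical.
import json
import pathlib

STEPS = ["generate_mesh", "run_solver", "visualize"]

def run_workflow(state_file="workflow_state.json"):
    path = pathlib.Path(state_file)
    done = json.loads(path.read_text()) if path.exists() else []
    for step in STEPS:
        if step in done:
            continue                       # already completed in a prior run
        print("executing", step)           # real work would happen here
        done.append(step)
        path.write_text(json.dumps(done))  # persist progress after each step
    return done

print(run_workflow())  # ['generate_mesh', 'run_solver', 'visualize']
```

Calling `run_workflow()` again after a crash simply skips the completed steps - an engineering run that takes days of solver time should never have to restart from the mesh.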
Thirdly, HPC Server 2008 offers a computing platform for your solvers that is easy to integrate into your engineering workflows, thanks to the variety of interfaces it exposes. Its management can mostly be delegated to a traditional IT organization, leaving the engineers free to focus on the design. It is a relatively small and conce