
HPC in Media 

Posted By:  Giovanni Marchetti 
Publish Date: 4/24/2008


In my last post, I explained how HPC can be used to build UFOs. I have since learned about using HPC to make movies. I’d never have guessed, so let me share my surprise here.

Digital media production follows a complex workflow, from initial sketch, to wireframe model, to rendered 3D images, to the finished movie. HPC is typically used in the rendering, encoding, or transcoding stages.

Although Microsoft offers solutions to manage the workflow (Interactive Media Manager) and to encode (Windows Media Encoder, Expression Encoder), we rely on partners for rendering, most transcoding, and non-Windows Media formats.

Rendering

Rendering is the process of transforming a 3-D model of an object (often displayed as a wireframe) into a 2-D image. Traditional non-interactive rendering (e.g. by ray-tracing) is a good example of an embarrassingly parallel problem. Several frames can be rendered at once, independently, one per computing node. The rendering software often also uses multi-threading on the node to compute several channels at once (e.g. diffuse light, reflected light, occlusions, refraction), then it composites those channels to generate the final image.
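To make that parallel structure concrete, here is a minimal sketch in Python. The render_channel and composite functions are placeholders I invented, not part of any real renderer: frames are farmed out to independent processes standing in for cluster nodes, while each frame's channels are computed concurrently on that "node" and then composited.

```python
# Minimal sketch of frame-level and channel-level parallelism.
# render_channel() and composite() are stand-ins for the real work.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

CHANNELS = ["diffuse", "reflected", "occlusion", "refraction"]

def render_channel(frame, channel):
    # Placeholder for the expensive per-channel ray-tracing work.
    return f"{channel}({frame})"

def composite(parts):
    # Placeholder for compositing the channels into the final image.
    return " + ".join(parts)

def render_frame(frame):
    # One frame per node; the node's threads each compute one channel.
    with ThreadPoolExecutor(max_workers=len(CHANNELS)) as threads:
        parts = list(threads.map(lambda ch: render_channel(frame, ch), CHANNELS))
    return composite(parts)

if __name__ == "__main__":
    # Frames are independent, so they map one-to-one onto compute nodes.
    with ProcessPoolExecutor() as nodes:
        for image in nodes.map(render_frame, range(1, 6)):
            print(image)
```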

Partners that are active in this space are:

- Autodesk with Maya

- Softimage with XSI / Ray3

- Mental Images with Mental Ray

All of those applications exist for Windows Server 2003 and are being ported to Server 2008. They offer varying levels of integration with our compute cluster scheduler. We built a simple demonstration with Softimage, for instance. We submitted a job to render 25 frames on a small cluster with 1 head node and 3 compute nodes, each with 4 cores. The job was created as a parametric sweep, with each task invoking the ray-tracing software installed on the node (Ray 3) on one of those frames. Ray 3 is multi-threaded and can render 8 channels, so we gave each task all 4 cores of its node.
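To illustrate how the sweep is structured (the render.exe command line below is hypothetical, not the actual Ray 3 invocation, and the real scheduler syntax depends on the HPC Pack version), the job boils down to 25 one-frame tasks, each requesting a full 4-core node:

```python
# Hypothetical sketch of the parametric sweep: one task per frame,
# each reserving a full 4-core node so the renderer's own threads can use it.
# "render.exe scene.xsi" is an invented command line, not the real Ray 3 call.
FRAMES = range(1, 26)      # frames 1..25
CORES_PER_TASK = 4         # one full node per task

tasks = [
    {
        "name": f"frame_{i:03d}",
        "cores": CORES_PER_TASK,
        "commandline": f"render.exe scene.xsi -frame {i} -threads {CORES_PER_TASK}",
    }
    for i in FRAMES
]

# With 3 compute nodes of 4 cores each, the scheduler runs 3 tasks at a time,
# so the 25 frames complete in roughly ceil(25 / 3) = 9 waves.
for t in tasks:
    print(t["name"], "->", t["commandline"])
```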

Here’s what the rendered frame 1 looks like:

[Image: rendered frame 1]

Here’s an example of one of the channels computed for the image above, i.e. the intensity of reflected light:

[Image: reflected-light channel of frame 1]

Encoding

Encoding a digital movie file (WMV or MPEG-2, for instance) is the process of exploiting the redundancy in the sequence of frames that make up the movie, so that fewer bits are needed to represent it than in the uncompressed stream.

There are two kinds of redundancy (both are illustrated in the sketch after this list):

- spatial, due to the correlation of neighboring pixels.

- temporal, due to the correlation of subsequent frames.
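Here is a small sketch on synthetic data (invented for illustration, not taken from any real codec) that makes both kinds concrete: neighbouring pixels within a frame differ very little, and the same pixel changes very little from one frame to the next, so differences need far fewer bits than the raw values.

```python
# Synthetic illustration of spatial and temporal redundancy (not a real codec).
import numpy as np

rng = np.random.default_rng(0)

# A smooth 64x64 gradient "frame" that drifts slightly over time, plus noise.
base = np.tile(np.linspace(0, 255, 64), (64, 1))
frames = [base + 2.0 * t + rng.normal(0, 1, base.shape) for t in range(10)]
f0, f1 = frames[0], frames[1]

spatial_diff = np.abs(np.diff(f0, axis=1)).mean()   # pixel vs. left neighbour
temporal_diff = np.abs(f1 - f0).mean()               # same pixel, next frame

print(f"mean pixel value              : {f0.mean():6.2f}")
print(f"mean |pixel - left neighbour| : {spatial_diff:6.2f}")   # spatial redundancy
print(f"mean |next frame - this frame|: {temporal_diff:6.2f}")  # temporal redundancy
```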

To exploit this redundancy in parallel, we could split the sequence of frames into a number of sections and assign each section to a separate node. We could further split each frame in a section into tiles and distribute the tiles among the cores within the node for encoding.
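A minimal sketch of that two-level split follows, with a hypothetical encode_tile() standing in for a real encoder: sections of consecutive frames go to separate processes (playing the role of nodes), and within each "node" the tiles of a frame are encoded concurrently on its cores.

```python
# Two-level split: frame sections across "nodes", frame tiles across cores.
# encode_tile() is a placeholder, not a real VC-1 or MPEG-2 encoder.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import numpy as np

NODES = 3           # compute nodes -> one section of consecutive frames each
CORES_PER_NODE = 4  # cores -> one tile of the current frame each

def split_into_sections(frames, n_sections):
    # Contiguous runs of frames, so each node still sees consecutive frames
    # and can exploit temporal redundancy inside its own section.
    size = -(-len(frames) // n_sections)        # ceiling division
    return [frames[i:i + size] for i in range(0, len(frames), size)]

def encode_tile(tile):
    # Placeholder for the real per-tile encoding work.
    return tile.astype(np.uint8).tobytes()

def encode_section(section):
    # Runs on one node: the tiles of each frame are encoded on its cores.
    encoded = []
    with ThreadPoolExecutor(max_workers=CORES_PER_NODE) as cores:
        for frame in section:
            tiles = np.array_split(frame, CORES_PER_NODE, axis=0)
            encoded.append(b"".join(cores.map(encode_tile, tiles)))
    return encoded

if __name__ == "__main__":
    frames = [np.zeros((64, 64)) for _ in range(25)]
    sections = split_into_sections(frames, NODES)
    with ProcessPoolExecutor(max_workers=NODES) as nodes:
        results = list(nodes.map(encode_section, sections))
    print([len(section) for section in results])   # frames encoded per node
```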

Microsoft has developed a VC-1 encoder (used for Blu-ray discs, amongst others) and an SDK for it that enable parallel encoding. Both are licensed commercially to ISVs and OEMs. One example of their use is Sonic's Parallel Stream Encoder.

Another popular encoding format is MPEG-2 (used for DVDs and digital TV), for which several open-source encoder implementations exist (Berkeley, Cornell).

The H.264 standard is also widely supported, and parallel encoder implementations are freely available for it (e.g. ELDER).