TechNet Magazine, July 2007
Windows PowerShell: Rethinking the Pipeline
Don Jones


Much has been said about how Windows PowerShell is different, new, and exciting—though it is based on command-line interface concepts that have been around for decades, primarily in UNIX and Linux-based operating systems. But the common terminology Windows PowerShell shares with its antecedents can make it easy to overlook the real flexibility and uniqueness of Windows PowerShell™—and its eminent suitability for a Windows® environment.
One of the most often talked-about features of Windows PowerShell is its pipeline, but it’s also, unfortunately, one of the most misunderstood features. That’s because it relies on terminology that was defined during the early 1970s, which once represented entirely different and less-powerful functionality.

The Origin of Pipes
One of the first UNIX shells ever created was the Thompson shell, which was very primitive and featured only the most basic scripting language elements with no variables. The shell had a deliberately modest design as its only real purpose was to execute programs. However, it introduced a key concept that improved upon other shells of the time: pipes. By using the < and > symbols, the shell could be instructed to redirect input and output to and from different commands. A user was now able to, for example, redirect command output to a file.
This syntax was later expanded so that the output of one command could also be piped to the input of another command, allowing long sequences of commands to be chained together to accomplish more complicated tasks. By version 4 of the shell, the vertical line character "|" was adopted for piping and hence became known as the pipe character. Even the earliest versions of MS-DOS® implemented basic piping through these characters, allowing a user to, for example, pipe the output of the type command into the input of the more command, creating a one-page-at-a-time display for long text files.
Although the Thompson shell was widely regarded as inadequate by the time UNIX version 6 was released in 1975, the concept of pipes was well-embedded with shell developers and users and has been carried forward into a number of technologies in use today.

Piping Text
A limitation of almost all shells is their inherently text-based nature. In UNIX-based operating systems, this isn’t actually a limitation, but rather a reflection of how the OS itself works. Nearly any resource in UNIX can be represented as a file of some sort, meaning the ability to pipe text from one command to another provides a great deal of power and flexibility.
Text, however, is definitely limiting when it comes to management information. For example, if I were to present you with a list of services running on a Windows computer, you would almost certainly be able to make sense of it. Perhaps I’d place the service name in the first column and its startup mode in the second. Your powerful human brain would transparently and instantaneously parse, or translate, the text display into meaningful information that you could work with. Computers, though, aren’t nearly that smart: to have a computer do something meaningful with that list, you’d have to tell the computer that the first column consists of characters 1 through 20, perhaps, and that characters 22 through 40 make up the second column, and so on.
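To make that concrete, here’s a sketch (in Windows PowerShell itself) of what positional text parsing looks like. The column widths and the sample line are hypothetical—a real listing would force you to verify them by hand:

```powershell
# Hypothetical fixed-width listing: characters 1-20 hold the service name,
# characters 22 onward hold the startup mode
$line = "Spooler              Automatic"
$name = $line.Substring(0, 20).Trim()   # extract and trim the first column
$mode = $line.Substring(21).Trim()      # everything after column 21
```

One misaligned column or an unexpectedly long name, and this logic silently extracts the wrong data—which is exactly why string parsing is so fragile.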
For ages, this type of text file parsing was the only way administrators had to chain multiple commands together. In fact, scripting languages such as VBScript and Perl excel at string manipulation primarily because they need to be able to accept text output from one program or command and then parse that output into some kind of useful data that can be used for some subsequent task. I’ve written VBScript jobs, for example, that accepted the text output of the Dir command, parsed the output for file names and dates, and then moved old, unused files to an archive location. String parsing is exceedingly tricky, because exceptions—variations in the input data—almost always occur, requiring you to redo the logic in the script to deal with all the possible permutations.
As a form of administrative scripting or automation in a Windows environment, string parsing is less useful. This is because Windows itself doesn’t store much information in an easy-to-access text format. Instead, it uses data-centric stores, such as Active Directory®, the Windows registry, and the certificate store. To chain commands together, scripters first have to use one tool to generate some form of text output and then write a script to parse that text and do something with it.

Objects Are Easier
Windows software developers have always had things a bit easier. Initially, Microsoft developed COM specifically to represent the complex internal workings of Windows in an easier-to-use fashion. Today, the Microsoft® .NET Framework carries on that same task—it represents the inner workings of software in a standardized fashion.
Generically, both COM and .NET expose items as objects. (A software developer might take exception to this simplification, but for the sake of our discussion, a simple term is sufficient.) These objects all have members of various types. For our purposes, the objects’ properties and methods are what we’re most interested in. A property essentially describes an object in some way, or it modifies the object or its behavior. For example, a service object might have a property that contains the service’s name and another property that contains the service’s startup mode. Methods cause an object to take some action. A service object might have methods named Stop, Start, Pause, and Resume, for example, representing the various actions you can take with a service.
From a programming—or scripting—perspective, an object’s members are referred to using a dotted notation. Objects are often assigned to variables, giving you a way to manipulate the object directly. For example, if I have a service assigned to the variable $service, I can stop that service using the syntax $service.Stop(). Or I can retrieve the service’s display name by displaying $service.DisplayName.
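As a quick illustration using the built-in Get-Service cmdlet (the Spooler service is just an example; stopping it requires administrative rights):

```powershell
$service = Get-Service -Name Spooler   # assign a service object to a variable
$service.DisplayName                   # read a property via dotted notation
$service.Stop()                        # invoke a method; note the parentheses
```

The parentheses matter: $service.Stop() invokes the method, while $service.Stop merely refers to it.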

Objects in the Pipe
Because Windows is a large, complex operating system, and because it doesn’t store its management data in text-style representations, older shell techniques aren’t terribly suitable for it. For example, suppose I have a command-line tool, named SvcList.exe, that produces a formatted list of services and their startup modes. In the Windows command-line shell—a shell that has its roots firmly in the decades-old MS-DOS shell—I might run something like:
SvcList.exe | MyScript.vbs 
This statement retrieves a list of services and pipes that list to a VBScript file. I’d have to write the VBScript file to parse the formatted list and do whatever I wanted it to do—perhaps output any services with a startup mode of Disabled. This task would be time-consuming. Ultimately, the problem is that SvcList.exe has an output that is unique—it doesn’t share a common format that other commands can easily consume to make use of its output.
Objects, however, can provide that common format and that’s why the Windows PowerShell pipeline works with entire objects, not just text. When you run a cmdlet like Get-WMIObject, you’re producing a group—or collection, in programmer terms—of objects. Each object comes complete with properties and methods that let you manipulate it. If I pipe the objects to the Where-Object cmdlet, I can filter them so that just the objects I want are displayed. Where-Object doesn’t need to parse any text because it isn’t receiving any text—it’s receiving objects. For example:
Get-WMIObject Win32_Service | Where-Object {$_.StartMode -eq "Disabled"}
Or, if you prefer the shorter syntax available through aliases:
gwmi Win32_Service | where {$_.StartMode -eq "Disabled"}
What’s interesting is that Windows PowerShell always passes objects down the pipeline. It isn’t until the end of the pipeline—when there’s nowhere else to pass the objects—that the shell generates a text representation of the objects using their built-in formatting rules. For example, consider this:
gwmi Win32_Service | where {$_.StartName -eq "LocalSystem"} | select Name,StartMode
This set of three cmdlets retrieves all the services from my local computer, filters out those that don’t log on using the LocalSystem account, and then passes the remainder on to the Select-Object cmdlet, which outputs only the two properties—Name and StartMode—that I’ve asked it to select. The result is a simple report of services that are logging on as LocalSystem (perhaps for security auditing purposes).
Since all of the cmdlets share a common data format—objects—they’re able to share data with one another without any complicated string parsing. And since Windows PowerShell has a native ability to create a text representation of an object, the end of this pipe is sure to be text output that I, a person, can read. Figure 1 shows an example of the output produced.
Figure 1 Text output produced by a series of piped cmdlets that pass along objects 

The Excitement of Pipes
The reason piping in Windows PowerShell is so amazing is that everything in Windows PowerShell is an object, complete with properties and methods that you can use. Even a text file is technically a collection of string objects, with each line in the file acting as a unique and independent string object. For example, create a text file (using Notepad) named C:\Computers.txt. Fill the file with text and then run the following in Windows PowerShell:
Get-Content C:\Computers.txt | Select-Object Length | Format-List
Or, again, if you prefer less typing, you can use aliases:
gc C:\Computers.txt | select Length | fl
This code provides a list that indicates how long each line of your text file is, in characters. Get-Content retrieves the string objects from the file, Select-Object grabs the Length property of each, and then Format-List creates the nice, readable text output for you. While this might not be a practical administrative task, it illustrates that even something as simple as a line of text is an object in Windows PowerShell.
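You can verify that each line really is a full-fledged string object by inspecting its type or calling one of its methods. This sketch assumes the C:\Computers.txt file from the example above:

```powershell
$lines = Get-Content C:\Computers.txt
$lines[0].GetType().FullName   # each line is a System.String object
$lines[0].ToUpper()            # so string methods work on it directly
```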
Being able to pipe objects from cmdlet to cmdlet—or even from cmdlet to script—enables the creation of incredibly powerful "one-liners." These are simple strings of cmdlets attached in a long pipeline that further refine a set of objects to give you exactly what you want. With practically no scripting or programming of any kind, Windows PowerShell cmdlets—strung together in an appropriate pipeline—can achieve remarkable results.
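For instance, the file-archiving job I described earlier as a VBScript parsing exercise collapses into a single pipeline. This is a sketch—the paths and the 90-day cutoff are hypothetical:

```powershell
# Move log files untouched for 90 days into an archive folder
Get-ChildItem C:\Logs -Filter *.log |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-90) } |
    Move-Item -Destination C:\Archive
```

No text is parsed anywhere: Get-ChildItem emits file objects, Where-Object tests their LastWriteTime property, and Move-Item consumes the survivors directly from the pipeline.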

Supporting Future Generations
The fact that future Microsoft server products are also being built on Windows PowerShell extends this functionality. When implementing a new Exchange Server 2007 machine, for example, you can use Windows PowerShell to retrieve all mailboxes, filter out all those not in the office where the new mail server will be located, and then move those mailboxes to the new server—all in a single line of text with no scripting. The Exchange Server 2007 team has published a lengthy list of powerful one-liners. These really demonstrate the power of the pipeline and the administrative tasks it is able to accomplish.
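A sketch of the kind of mailbox move I’m describing, using the Exchange Server 2007 cmdlets—the office name and target database here are hypothetical:

```powershell
# Move every mailbox in the new server's office to the new server
Get-Mailbox | Where-Object { $_.Office -eq "Las Vegas" } |
    Move-Mailbox -TargetDatabase "NEWSERVER\First Storage Group\Mailbox Database"
```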
The trick with Windows PowerShell is to understand that, while it’s built on long-established principles and philosophies from the UNIX world, this new tool is uniquely suited for Windows administration. Don’t let any commonality of terminology fool you into thinking that Windows PowerShell is just a UNIX shell knockoff for Windows. Windows PowerShell contains brand new concepts that take advantage of the Windows platform, and it is tightly coupled to the Windows way of doing things.

Don Jones is a Windows PowerShell MVP and author of Windows PowerShell 101 (ScriptingTraining.com). You can contact Don at www.ScriptingAnswers.com.
© 2008 Microsoft Corporation and CMP Media, LLC. All rights reserved; reproduction in part or in whole without permission is prohibited.