
Windows PowerShell: The Windows PowerShell Workflow environment

Each month this year, Don Jones will present an installment in a 12-part tutorial on Windows PowerShell Workflow. We encourage you to read through the series in order, beginning with the January 2013 column.

Don Jones

As I explained last month, Windows PowerShell Workflow is a new feature in version 3 of the Windows Management Framework. It’s preinstalled on Windows Server 2012 and Windows 8. It’s also available for Windows 7, Windows Server 2008 and Windows Server 2008 R2. You’ll need one of those OSes to run a workflow. You can also have a workflow target—that is, perform tasks against—any version of Windows, depending on the task you’re trying to accomplish.

There are two main prerequisites to using Windows PowerShell Workflow, aside from the standard system requirements for Windows PowerShell version 3. First, you’ll need to use the bundled PSWorkflow module to enable workflow features inside the shell. Second, you’ll need to have Windows PowerShell Remoting enabled on the machines you plan to target with Windows PowerShell Workflow.
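In practice, each prerequisite comes down to a single command. Run the second one in an elevated session on each machine you plan to target:

```powershell
# Load the workflow feature into the current session
Import-Module PSWorkflow

# Enable Remoting on a machine you plan to target with workflows
# (run this in an elevated session on that machine)
Enable-PSRemoting -Force
```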

Remoting doesn’t necessarily need to be running on the machine where you execute workflows, but whichever machines you plan to inventory, configure or otherwise manage will need to have Remoting enabled. Windows PowerShell Workflow uses Remoting as its primary communications protocol. There are probably some workflows you could write that wouldn’t need Remoting, but those are few and far between.

Remoting is a vastly misunderstood component of Windows PowerShell. More than a few organizations’ IT security folks take a do-not-deploy approach. That’s misguided and unfortunate. Those same security-conscious folks have no problem allowing a large number of other management communications protocols such as Remote Desktop Protocol (RDP), remote procedure calls (RPCs) and HTTP.

Remoting is the way forward for management communications within a Windows environment. Microsoft will soon begin consolidating those older protocols into Remoting. Remoting uses HTTP and HTTPS, is highly configurable and manageable, and you can centrally lock it down with a Group Policy Object (GPO). It’s also generally more secure, more auditable and more controllable than those older protocols. For a more complete discussion of the security implications of Remoting and its architecture, consider downloading the free e-book, “Secrets of PowerShell Remoting.”

Inside a workflow

The biggest challenge you’ll have when approaching Windows PowerShell Workflow is to remember that it looks like a Windows PowerShell script (specifically, a function), but it isn’t. The “Windows PowerShell language” you write is translated into an external language (XAML) and then executed by non-Windows PowerShell technology.

As a result, there are a few different rules. Some of those are purely syntactical. Generally speaking, you’ll need to use full cmdlet names for cmdlets you run. Also, positional parameters and truncated parameter names aren’t allowed. You must use full parameter names.
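To illustrate those syntax rules, here’s a minimal workflow declaration. It looks like a function with the workflow keyword, and every cmdlet call spells out the cmdlet name and parameter names in full (the cmdlet here is just an arbitrary example):

```powershell
workflow Get-OSInfo {
    # Full cmdlet name and full parameter name -- no aliases,
    # no truncated names, no positional parameters
    Get-CimInstance -ClassName Win32_OperatingSystem
}
```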

What can be more confusing is the overall environment within a workflow. The whole point of a workflow is that you can interrupt and resume the workflow deliberately or do so in reaction to something going wrong in your environment (such as a loss of power).

That means each command in a workflow must essentially stand alone. It must have no awareness of what other commands have done, and no native means of persisting data between commands. It also means you can’t set a variable to hold the output of one command and then pass that variable as input to another command. It just won’t work. Also, you can’t load modules containing additional commands. There’s no persistent environment in which to load the module.

The special InlineScript{} block that can go into a workflow now becomes your best friend. Each InlineScript is basically run as a single unit. Windows Workflow Foundation (WWF) spins up a copy of Windows PowerShell, jams in the entire InlineScript, and it runs. The contents of an InlineScript behave like a traditional Windows PowerShell script.
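Here’s a sketch of what that looks like; the service name is arbitrary, chosen only for illustration:

```powershell
workflow Restart-AService {
    InlineScript {
        # Everything in this block runs in one spun-up copy of
        # Windows PowerShell, so variables persist from line to
        # line just as they would in a normal script
        $svc = Get-Service -Name 'BITS'
        if ($svc.Status -ne 'Running') {
            Start-Service -Name 'BITS'
        }
    }
}
```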

There’s also an implicit InlineScript. WWF can’t actually run Windows PowerShell commands outside of an InlineScript. When you use a Windows PowerShell cmdlet in your workflow, Windows PowerShell checks to see if a native WWF equivalent is available and, if so, tells WWF to run that instead.

The Windows PowerShell team has written WWF equivalents for many native Windows PowerShell commands, but when you try to use a cmdlet that doesn’t have a native WWF version, Windows PowerShell implicitly wraps your command in an InlineScript block. This causes WWF to launch Windows PowerShell, run your command, then close that instance of Windows PowerShell.
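In other words, assuming Get-Widget is a cmdlet with no native WWF activity (a hypothetical name, used purely for illustration), these two workflows behave the same way:

```powershell
workflow Test-Implicit {
    # Get-Widget is hypothetical; because WWF has no activity
    # for it, Windows PowerShell wraps the call implicitly...
    Get-Widget -Name 'Example'
}

workflow Test-Explicit {
    # ...which is equivalent to writing the wrapper yourself
    InlineScript {
        Get-Widget -Name 'Example'
    }
}
```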

This lack of persistence between commands is what lets Windows PowerShell Workflow make your scripts resumable. It’s also a big part of why Windows PowerShell Workflow can parallelize commands within a script, turning your scripts into multithreaded automation machines. However, the lack of persistence is a major departure from how normal Windows PowerShell scripts and functions work. It takes a lot of additional planning to make a script work properly.

It won’t be unusual to see workflows that consist of a bunch of InlineScript blocks. Nor will it be unusual to see workflows that are nothing more than a single, long InlineScript block.

Such workflows will have limited (or nonexistent) parallelization or resume capabilities. An InlineScript is a single unit of work. You might simply choose to run such scripts as normal functions, using Remoting rather than workflows. You’d be getting very few of the actual benefits of Windows PowerShell Workflow anyway.
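For example, if an entire workflow consists of one long InlineScript, plain Remoting delivers essentially the same result (the computer names below are placeholders):

```powershell
# Same work, no workflow: push the script block to each
# computer over Remoting with Invoke-Command
Invoke-Command -ComputerName SERVER1, SERVER2 -ScriptBlock {
    Get-EventLog -LogName System -Newest 10
}
```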

Keep these handy

One thing Windows PowerShell Workflow distinctly lacks is a means of persisting a workflow’s data between commands. Such a feature would make workflows useful in a broader range of circumstances: if a workflow were interrupted and resumed, commands could easily recreate state information such as IP addresses, computer names, user names or whatever else, and continue running.

Given this native shortcoming, it’s something you’ll probably want to create for yourself. Set up a SQL Server instance (even a free Express instance) and create database tables in which your workflows can save data. A workflow will need some means of uniquely identifying itself, such as the workflow name and the computer name each workflow instance is targeting.

Your workflow might then consist of a number of discrete InlineScript blocks. Each one will read the current state from the database, perform some task and, if necessary, update the database with new information. It’s a shame that Windows PowerShell Workflow features can’t help automate this. Perhaps a “$workflow:variable” architecture that uses XML files or SQL Server under the hood could help, but that’s something for a future version.
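Here’s a rough sketch of one such block, assuming a hypothetical WorkflowState database with a State table. The connection string, table and column names are all inventions for illustration, not part of any shipped module:

```powershell
workflow Update-Widgets {
    param([string]$ComputerName)

    InlineScript {
        # Read this instance's saved state from SQL Server,
        # keyed on the workflow name and target computer
        $conn = New-Object System.Data.SqlClient.SqlConnection(
            'Server=.\SQLEXPRESS;Database=WorkflowState;Integrated Security=True')
        $conn.Open()
        $cmd = $conn.CreateCommand()
        $cmd.CommandText = "SELECT Phase FROM State " +
            "WHERE Workflow='Update-Widgets' AND Computer=@c"
        [void]$cmd.Parameters.AddWithValue('@c', $using:ComputerName)
        $phase = $cmd.ExecuteScalar()

        # ...perform the task appropriate to $phase, write the
        # new phase back with an UPDATE in the same fashion,
        # and close the connection when done
        $conn.Close()
    }
}
```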

The good news is that it’s pretty easy to “roll your own” persistence using SQL Server. SQL Server is preferable to XML files because it’s more scalable, you can access it more easily from multiple network locations, and it supports better concurrent access from multiple instances at once.

The Microsoft .NET Framework System.Data.SqlClient classes (such as SqlConnection and SqlCommand) are easy to use. There are examples in my “Learn PowerShell Toolmaking in a Month of Lunches” e-book. In fact, my coauthor and I created an entire module you can download for free, as part of the book’s sample scripts, and repurpose.

With these preliminaries out of the way, we’re ready to start examining what a basic workflow looks like. That will be the subject of next month’s column.

Don Jones

Don Jones is a Windows PowerShell MVP Award winner and contributing editor to TechNet Magazine. He has coauthored four books about Windows PowerShell version 3, including several free titles on creating HTML reports in Windows PowerShell and Windows PowerShell Remoting. Find them all at PowerShellBooks.com, or ask Jones your question in the discussion forums at PowerShell.org.
