Appendix 5: Scriptable Power Control Tools

Applies To: Windows HPC Server 2008

The cluster administration console (HPC Cluster Manager) includes actions to start, shut down, and restart compute nodes remotely: Start, Reboot, and Shut Down in the Actions pane in Node Management. These actions call the CcpPower.cmd script, which performs the power control operations by using operating system commands. The exception is the Start action, which is not enabled by default because there is no operating system command to power on a node remotely.

You can replace the default operating system commands in CcpPower.cmd with custom power control scripts, like Intelligent Platform Management Interface (IPMI) scripts.

CcpPower.cmd is available in the Bin folder of the installation path for HPC Pack 2008. For example, if you are using the default installation path, the file is available here:

C:\Program Files\Microsoft HPC Pack\Bin\CcpPower.cmd

The default CcpPower.cmd file has the following code:

@setlocal
@echo off
if L%1 == Lon goto on
if L%1 == Loff goto off
if L%1 == Lcycle goto cycle
echo "usage:CcpPower.cmd [on|off|cycle] nodename [ipaddress]"
goto done

:on
exit /b 1
goto done

:off
shutdown /s /t 0 /f /m \\%2
goto done

:cycle
shutdown /r /t 0 /f /m \\%2
goto done

:done
exit /b %ERRORLEVEL%
endlocal
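
To illustrate how the script's parameters are used, the following shows the commands that the default script would run when it is invoked. The node name and management IP address here are hypothetical examples; the first parameter selects the operation, the second is the node name, and the third (when supplied) is the management IP address:

CcpPower.cmd off COMPUTENODE01 10.0.0.21
    Runs: shutdown /s /t 0 /f /m \\COMPUTENODE01

CcpPower.cmd cycle COMPUTENODE01 10.0.0.21
    Runs: shutdown /r /t 0 /f /m \\COMPUTENODE01

CcpPower.cmd on COMPUTENODE01 10.0.0.21
    Returns exit code 1 (the Start action is not enabled by default)

Note that the default script ignores the third parameter; it becomes relevant only when you substitute power control tools that target the management IP address.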

To enable scriptable power control tools for the Shut Down and Reboot actions in HPC Cluster Manager, replace the shutdown command entries in CcpPower.cmd with the path and name of your tool or tools for shutting down and restarting the node. To enable a tool for the Start action, replace the exit entry in the :on section with the path and name of your tool for this action.
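As a sketch of this customization, the following replacement sections use ipmitool, a common command-line IPMI client, to send power commands to the node's BMC at the management IP address (%3). The tool choice, its installation path, and the user name and password placeholders are assumptions for illustration; substitute the tool and credentials appropriate for your hardware:

:on
"C:\Tools\ipmitool.exe" -I lanplus -H %3 -U admin -P password chassis power on
goto done

:off
"C:\Tools\ipmitool.exe" -I lanplus -H %3 -U admin -P password chassis power soft
goto done

:cycle
"C:\Tools\ipmitool.exe" -I lanplus -H %3 -U admin -P password chassis power cycle
goto done

Because the script ends with exit /b %ERRORLEVEL%, HPC Cluster Manager receives the exit code of the last command, so a tool that returns a nonzero exit code on failure will surface the error in the console.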

Also, you must associate a management IP address with each compute node in the cluster (for example, the IP address of the Baseboard Management Controller (BMC) of the compute node). The management IP address is the third parameter (%3) that HPC Cluster Manager passes to the CcpPower.cmd script, and it should be passed to your power control tools when you add them to CcpPower.cmd. A management IP address can be associated with each compute node in the cluster in the following ways: