CIM or WMI – accessing remote machines

I much prefer the CIM cmdlets for accessing remote machines. The WMI cmdlets use DCOM, which is firewall unfriendly and can often be unavailable on a server – cue the dreaded "The RPC server is unavailable" error messages.

By contrast the CIM cmdlets use WSMAN.

For one-off access to a remote machine use the -ComputerName parameter:

Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName RSSURFACEPRO2

 

If you want to access a machine multiple times in the same session, create a CIM session – analogous to a remoting session:

 

$cs = New-CimSession -ComputerName RSSURFACEPRO2
Get-CimInstance -ClassName Win32_OperatingSystem -CimSession $cs

 

By default a CIM session uses WSMAN:

£> $cs


Id           : 1
Name         : CimSession1
InstanceId   : 30c2b530-4ff7-448e-b68d-1f1282890e6a
ComputerName : RSSURFACEPRO2
Protocol     : WSMAN

 

though you can configure them to use DCOM if need be

$opt = New-CimSessionOption -Protocol DCOM
$csd = New-CimSession -ComputerName RSSURFACEPRO2 -SessionOption $opt
Get-CimInstance -ClassName Win32_OperatingSystem -CimSession $csd

 

When would you need to use DCOM? If you are accessing a machine with PowerShell 2 installed. The CIM cmdlets expect WSMAN 3 and will error if you access a machine with WSMAN 2 installed. However, if you include a -Filter they will work.

So

Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName computer1

will fail if computer1 is running WSMAN 2 (PowerShell 2)

However, if you change the command to include a filter:

Get-CimInstance -ClassName Win32_OperatingSystem -Filter "Manufacturer LIKE 'Microsoft%'" -ComputerName computer1

 

it will succeed – even if, as in this case, the filter doesn’t actually do anything.
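If you want to check up front which WSMAN stack a remote machine is running, Test-WSMan will tell you without needing CIM access. A minimal sketch, reusing the hypothetical computer name from the examples above:

```powershell
# Test-WSMan queries the WinRM service on the remote machine;
# the ProductVersion string ends with the WSMAN stack version (2.0 or 3.0)
$w = Test-WSMan -ComputerName computer1
$w.ProductVersion
```

A stack version of 2.0 tells you to fall back to a DCOM session option as shown earlier.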

Finding a CIM class

I was investigating something on my disks and started to look at the partitions:

£> Get-CimInstance -ClassName Win32_Partition
Get-CimInstance : Invalid class
At line:1 char:1
+ Get-CimInstance -ClassName Win32_Partition
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : MetadataError: (root\cimv2:Win32_Partition:String) [Get-CimInstance], CimException
    + FullyQualifiedErrorId : HRESULT 0x80041010,Microsoft.Management.Infrastructure.CimCmdlets.GetCimInstanceCommand

OK, so my memory isn’t what it was and I got the class name wrong. How do you find the correct class?

£> Get-CimClass *Partition*


   NameSpace: ROOT/cimv2

CimClassName
------------
CIM_DiskPartition
Win32_DiskPartition
Win32_SystemPartitions
CIM_LogicalDiskBasedOnPartition
Win32_LogicalDiskToPartition
CIM_RealizesDiskPartition
Win32_DiskDriveToDiskPartition
Win32_PerfFormattedData_HvStats_...
Win32_PerfRawData_HvStats_HyperV...
Win32_PerfFormattedData_HvStats_...
Win32_PerfRawData_HvStats_HyperV...
Win32_PerfFormattedData_VidPerfP...
Win32_PerfRawData_VidPerfProvide...

 

I’ve truncated the display horizontally as I’m not interested in the methods & properties at this point.


So the class I want is Win32_DiskPartition.

Get-CimClass is one of the biggest benefits of PowerShell 3.0.

CIM or WMI – using methods

The CIM and WMI cmdlets both provide a way to use the methods on CIM classes, namely Invoke-CimMethod and Invoke-WmiMethod. The cmdlets are very similar in operation.

$vol = Get-WmiObject -Class Win32_Volume -Filter "DriveLetter = 'D:'"

Invoke-WmiMethod -InputObject $vol -Name Chkdsk -ArgumentList $false, $true, $true, $false, $false, $false

 

The argument list isn’t very informative – unless you know the class, read the documentation or investigate with Get-CimClass.
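As a quick sketch of that investigation – Get-CimClass exposes the method definitions, so you can list the parameters Chkdsk expects by name and type:

```powershell
# The CimClassMethods collection can be indexed by method name;
# each entry lists the method's parameters with their CIM types
$class = Get-CimClass -ClassName Win32_Volume
$class.CimClassMethods['Chkdsk'].Parameters | Select-Object Name, CimType
```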

 

Using the CIM cmdlets is a bit more informative as to what is going on.

$vol = Get-CimInstance -ClassName Win32_Volume -Filter "DriveLetter = 'D:'"

Invoke-CimMethod -InputObject $vol -MethodName Chkdsk -Arguments @{FixErrors=$false; ForceDismount=$false; OkToRunAtBootUp=$false; RecoverBadSectors=$false; SkipFolderCycle=$true; VigorousIndexCheck=$true}

 

You present the arguments as a hash table – this means you can create the hash table first and pass it to the method:

$margs = @{
FixErrors=$false
ForceDismount=$false
OkToRunAtBootUp = $false
RecoverBadSectors = $false
SkipFolderCycle = $true
VigorousIndexCheck = $true
}
Invoke-CimMethod -InputObject $vol -MethodName Chkdsk -Arguments $margs

 

This also means that you can create a default set of values and manipulate them in your scripts very easily
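For instance – a sketch of one way to keep a default argument set and override individual values per call (the override shown is purely illustrative):

```powershell
# Default argument set for Chkdsk
$defaults = @{
    FixErrors          = $false
    ForceDismount      = $false
    OkToRunAtBootUp    = $false
    RecoverBadSectors  = $false
    SkipFolderCycle    = $true
    VigorousIndexCheck = $true
}

# Clone the defaults and change only what this run needs
$margs = $defaults.Clone()
$margs['FixErrors'] = $true

Invoke-CimMethod -InputObject $vol -MethodName Chkdsk -Arguments $margs
```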

 

Using Invoke-CimMethod involves more typing, but I think that’s worth it for the clarity. Of course, if you are going to be using the methods of a class a lot, then I’d recommend that you create a CDXML module from the class – but that’s a whole different set of articles.

Workflows 7: checkpointing workflows

Consider this workflow

workflow chkpt1 {

Get-Process

foreach ($x in 1..20){
$x
}

}

 

It will dump out the process information, then output the numbers 1 to 20. Not a particularly enlightening workflow, but it forms a nice basis for demonstrating checkpoints.

A checkpoint saves the state and data in the workflow. If the workflow is suspended or interrupted, it can be restarted from the most recent checkpoint rather than requiring a complete restart from the beginning.

 

Change the workflow to give a long running activity

workflow chkpt1 {

Get-Process


foreach ($x in 1..1000){
$x
}

}

Let the process data be displayed and then stop execution once the numbers start displaying – this is easiest if you use ISE. If you want to restart the workflow you have to start it right from the beginning.

Add a checkpoint to the workflow. This can be achieved in a number of ways.

Use the Checkpoint-Workflow activity:

workflow chkpt1 {

Get-Process
Checkpoint-Workflow

foreach ($x in 1..1000){
$x
}

}

 

or use the -PSPersist parameter:

workflow chkpt1 {

Get-Process -PSPersist


foreach ($x in 1..1000){
$x
}

}

 

I prefer to use Checkpoint-Workflow as it is more obvious to me when I review the workflow.

 

If you want to checkpoint your workflows, you have to start them as a job:

 

chkpt1 -AsJob

 

Then shut down ISE.

Open another PowerShell session with elevated privileges. Use Get-Job to see the suspended job.

View the data in the job

Receive-Job -Id 5 -Keep

 

Restart the job

Resume-Job -Id 5

 

Once the job finishes view the data and you’ll see the process data and the list of numbers.

Use Checkpoint-Workflow as many times as necessary to protect your data in long-running workflows. A checkpoint is a good idea any time it would be expensive to restart the whole process.
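As a sketch, a longer workflow might checkpoint after each expensive stage (the stages here are illustrative, not a recommendation for these particular cmdlets):

```powershell
workflow chkpt2 {

    Get-Process
    Checkpoint-Workflow    # state saved - a resume restarts from here

    Get-Service
    Checkpoint-Workflow    # second save point, after the next stage

    foreach ($x in 1..1000){
        $x
    }

}
```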

Workflows 6: suspending jobs

One of the great things about workflows is that you can stop and start them. A workflow can be stopped, on a temporary basis, by using the Suspend-Workflow activity.

workflow suspend1 {
Get-Service

Suspend-Workflow

Get-Process

}
suspend1

 

This will run the Get-Service activity – and produce output to the console. The workflow will suspend and automatically create a job. You will see output like this:

HasMoreData     : True
StatusMessage   :
Location        : localhost
StartParameters : {}
Command         : suspend1
JobStateInfo    : Suspended
Finished        : System.Threading.ManualResetEvent
InstanceId      : bbe0903d-1720-46da-a6dd-e0a927aa9e11
Id              : 8
Name            : Job8
ChildJobs       : {Job9}
PSBeginTime     : 30/06/2014 19:40:29
PSEndTime       : 30/06/2014 19:40:29
PSJobTypeName   : PSWorkflowJob
Output          : {}
Error           : {}
Progress        : {}
Verbose         : {}
Debug           : {}
Warning         : {}
State           : Suspended

 

Notice the state.

You can manage the job with the standard job cmdlets:

£> Get-Job

Id Name PSJobTypeName State     HasMoreData Location  Command
-- ---- ------------- -----     ----------- --------  -------
8  Job8 PSWorkflowJob Suspended True        localhost suspend1

 

The job is restarted using Resume-Job. Once the job has finished you can use Receive-Job to get the rest of the data.
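Putting that together for the suspended job shown above (the job Id comes from my Get-Job output and will differ on your system):

```powershell
Resume-Job -Id 8          # restart the workflow after Suspend-Workflow
Wait-Job -Id 8            # wait for the workflow to complete
Receive-Job -Id 8 -Keep   # the Get-Process output arrives with the rest of the data
```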

Workflows: 5a CDXML modules update

In my last post I questioned why commands from CDXML modules didn’t fail, as there weren’t any activities defined for them. It turns out that functions and other commands that don't explicitly have their own workflow activities are implicitly wrapped in the InlineScript activity. As CDXML modules effectively create functions, workflows accept them and run.

Thanks to Steve Murawski for the answer.

Workflows: 5 CDXML modules

Last time we saw that you’re not really using cmdlets in PowerShell workflows – you’re using workflow activities. Some cmdlets haven’t been packaged into activities, and for those you need to put them in an InlineScript block. You can also use an InlineScript block to run any arbitrary piece of PowerShell.

One thing I hadn’t tried was using some of the CDXML modules (a WMI class wrapped in XML and published as a PowerShell module) that ship in Windows 8 and later. As far as I was aware they hadn’t been packaged as activities, so I thought I’d try this:

workflow net1 {
parallel {
  Get-NetAdapter
  Get-NetIPAddress
}
}
net1

 

Surprisingly, it worked.

The only reason I can think of is the way CDXML publishes the cmdlets as functions (compare ls function:\Get-NetAdapter | fl *), which enables a workflow to use them. Even if the module containing the cmdlet hasn’t been explicitly or implicitly loaded, the workflow still runs.

You can tell these cmdlets haven’t been packaged as activities as the activity common parameters aren’t available on them.

One of life’s little mysteries. I’ll get to the bottom of it and find out why eventually. In the meantime our workflows just became a whole lot richer in terms of functionality.

Workflows: 4 Using cmdlets

This is a simple function to return some data about a system:

function get-serverdata {

Get-CimInstance -ClassName Win32_OperatingSystem

Get-Service

Get-Process


}

get-serverdata

 

The function will return the CIM data about the operating system, then the service data, and then the process data. The only difference from running the cmdlets interactively is that the display for services and processes defaults to a list rather than a table.

Status      : Stopped
Name        : WwanSvc
DisplayName : WWAN AutoConfig


Id      : 1524
Handles : 81
CPU     : 0.015625
Name    : armsvc

 

Now let’s try that as a workflow:

 

workflow get-serverdata {

Get-CimInstance -ClassName Win32_OperatingSystem

Get-Service

Get-Process


}

get-serverdata

 

You’ll see the data returned in exactly the same order – operating system, services and processes. The only difference is that a PSComputerName property is added to the output.

Status         : Stopped
Name           : WwanSvc
DisplayName    : WWAN AutoConfig
PSComputerName : localhost


Id             : 1524
Handles        : 81
CPU            : 0.015625
Name           : armsvc
PSComputerName : localhost

 

I didn’t emphasise this when discussing parallel and sequential processing earlier, but the default action for a workflow is to process the commands sequentially. You use the parallel and sequence keywords to control that processing – for instance, to run the cmdlets in parallel:

workflow get-serverdata {

parallel {
Get-CimInstance -ClassName Win32_OperatingSystem

Get-Service

Get-Process
}

}

get-serverdata

 

From this I see a block of service data (in table format) followed by the first process, another service, the CIM data and more services, then mixed service and process data until the end.

I can’t emphasise this point enough – when running workflow tasks in parallel you have no control of the order in which data is returned.

 

You may be tempted to try something like this:

workflow get-serverdata {

parallel {
Get-CimInstance -ClassName Win32_OperatingSystem

Get-Service | Format-Table

Get-Process | Format-Table
}

}

get-serverdata

 

Don’t. You’ll see an error message:

At line:6 char:15
+ Get-Service | Format-Table
+               ~~~~~~~~~~~~
Cannot call the 'Format-Table' command. Other commands from this module have been packaged as workflow activities, but this command was specifically excluded. This is likely because the command requires an interactive Windows PowerShell session, or has behavior not suited for workflows. To run this command anyway, place it within an inline-script (InlineScript { Format-Table }) where it will be invoked in isolation.
    + CategoryInfo          : ParserError: (:) [], ParseException
    + FullyQualifiedErrorId : CommandActivityExcluded

 

There are a couple of very important pieces of information in this message:

“packaged as workflow activities”

The commands used in the workflows in this post ARE NOT CMDLETS. I’ve capitalised that for emphasis. Workflows do not run PowerShell cmdlets – they use workflow activities, which are cmdlets repackaged to run in workflows. Not all cmdlets have been repackaged as activities, and not all cmdlets can be, which leads to the second important point from the error:

“To run this command anyway, place it within an inline-script (InlineScript { Format-Table }) where it will be invoked in isolation. “

 

Any cmdlet or command that hasn’t been specifically converted to a workflow activity can be wrapped in an InlineScript block if you need to run it in a workflow. The example would become:

workflow get-serverdata {

parallel {
Get-CimInstance -ClassName Win32_OperatingSystem

inlinescript {
Get-Service | Format-Table

Get-Process | Format-Table
}

}

}

get-serverdata

 

You can see a list of the cmdlets that have not been implemented as workflow activities at this link: http://technet.microsoft.com/en-us/library/jj574194.aspx

Scroll down to the excluded cmdlets.

 

PowerShell workflows look like PowerShell but they are not PowerShell. Next time we’ll dig a bit further under the covers of PowerShell workflows.

Workflows: 3 parallel and sequence

I said in the first post in this series that you could force a workflow to perform tasks in parallel or in sequence. Starting with parallel: you can force parallel execution by using the parallel keyword:

workflow thursday1 {
parallel {
   1..26 | foreach {$psitem}
   65..90 | foreach {[char][byte]$psitem}
   97..122 | foreach {[char][byte]$psitem}

   1..26 | foreach {$psitem}
   65..90 | foreach {[char][byte]$psitem}
   97..122 | foreach {[char][byte]$psitem}

   1..26 | foreach {$psitem}
   65..90 | foreach {[char][byte]$psitem}
   97..122 | foreach {[char][byte]$psitem}
}
}

thursday1

In this workflow I’m printing out the numbers 1-26, the characters A-Z and a-z, then repeating those actions another two times. The repetition is to force the parallelism to be visible – if you just run one repetition, everything runs so fast that you don’t see any evidence of parallel activity. If you look carefully at the output you will see evidence of parallelism. For instance, the test I ran showed this partial sequence of results:

Y
16
17
B
18
C
19
20
D
a
b

No prizes for guessing that the keyword to force things to run sequentially is sequence:

workflow thursday2 {
sequence {
   1..26 | foreach {$psitem}
   65..90 | foreach {[char][byte]$psitem}
   97..122 | foreach {[char][byte]$psitem}
}
}

thursday2

No matter how many times you run this you’ll always get the same sequence of results – numbers, upper case then lower case characters.

You can mix and match:

workflow thursday3 {
parallel {
    sequence {
     1..26 | foreach {$psitem}
     65..90 | foreach {[char][byte]$psitem}
     97..122 | foreach {[char][byte]$psitem}
   }
    sequence {
     1..26 | foreach {$psitem}
     65..90 | foreach {[char][byte]$psitem}
     97..122 | foreach {[char][byte]$psitem}
   }
    sequence {
     1..26 | foreach {$psitem}
     65..90 | foreach {[char][byte]$psitem}
     97..122 | foreach {[char][byte]$psitem}
   }
}
}
thursday3

This runs the sequence of numbers, upper and lower case three times in parallel.

or

workflow thursday4 {
sequence {
    parallel {
     1..26 | foreach {$psitem}
     65..90 | foreach {[char][byte]$psitem}
     97..122 | foreach {[char][byte]$psitem}
   }
    parallel {
     1..26 | foreach {$psitem}
     65..90 | foreach {[char][byte]$psitem}
     97..122 | foreach {[char][byte]$psitem}
   }
    parallel {
     1..26 | foreach {$psitem}
     65..90 | foreach {[char][byte]$psitem}
     97..122 | foreach {[char][byte]$psitem}
   }
}
}

thursday4

 

which runs numbers, upper and lower case in parallel – three times in sequence

Try thursday3 and thursday4 and observe the results. I’d encourage you to experiment with combinations of sequence and parallel so that you get a feel for the difference between the two types of action.

Your observations should help reinforce the message from the first post – you can’t predict the order of results when performing tasks in parallel.

Workflows: 2 Additional reading

If you’re really interested in using workflows you may find this series of articles I did for the Scripting Guy useful:

1. Basics - introduce workflows, key concepts and keywords
http://blogs.technet.com/b/heyscriptingguy/archive/2012/12/26/powershell-workflows-the-basics.aspx

2. Restrictions - cmdlets not available as workflow activities, using inlinescript, using variables
http://blogs.technet.com/b/heyscriptingguy/archive/2013/01/02/powershell-workflows-restrictions.aspx

3. Nesting workflows and functions
http://blogs.technet.com/b/heyscriptingguy/archive/2013/01/09/powershell-workflows-nesting.aspx

4. Workflows and the PowerShell job engine - suspending and resuming; checkpointing and recovery
http://blogs.technet.com/b/heyscriptingguy/archive/2013/01/16/powershell-workflows-job-engine.aspx

5. Workflows and computer restarts
http://blogs.technet.com/b/heyscriptingguy/archive/2013/01/23/powershell-workflows-restarting-the-computer.aspx

6. Workflow parameters - where's the best place to put the computername?
http://blogs.technet.com/b/heyscriptingguy/archive/2013/01/30/powershell-workflows-using-parameters.aspx

7. Considerations for designing workflows; when to use workflows vs jobs
http://blogs.technet.com/b/heyscriptingguy/archive/2013/02/06/powershell-workflows-design-considerations.aspx

8. A practical example
http://blogs.technet.com/b/heyscriptingguy/archive/2013/02/13/powershell-workflows-a-practical-example.aspx