Output from jobs

I tripped over a little problem the other day that’s worth reporting.  I was running something like this:

 

$sb = {
    $services = Get-Service
    $services | Export-Csv test.csv -NoTypeInformation
}

Start-Job -ScriptBlock $sb -Name test

 

I was collecting some data and outputting a CSV. My problem was more complex, but this stands as a good example.

I didn’t get the data I wanted.

Thinking about it, I put the full path to where I wanted the CSV:

 

$sb = {
    $services = Get-Service
    $services | Export-Csv C:\MyData\scripts\Test\test.csv -NoTypeInformation
}

Start-Job -ScriptBlock $sb -Name test

 

And it works.

 

So where did my data go in the original version?

 

I ran this:

 

$sb = {
    Get-Location

    $services = Get-Service
    $services | Export-Csv test.csv -NoTypeInformation
}

Start-Job -ScriptBlock $sb -Name test

 

And then pulled the data from the job:

 

£> Receive-Job -Id 10

Path
----
C:\Users\Richard\Documents

 

Obvious really: a job runs in a new PowerShell process that doesn’t run your profile, so it starts in the default location, which is your home directory. And sure enough, the CSV file is there:

 

£> ls C:\Users\Richard\Documents\*.csv


    Directory: C:\Users\Richard\Documents


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---        14/09/2014     11:50      46042 test.csv

 

I can’t remember how many times I’ve told people that PowerShell jobs run in a separate process, so I should have realised what was happening. An excellent example of how the more you know, the more you need to learn.
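One way round the problem, rather than hard-coding the full path into every Export-Csv call, is to set the job’s location explicitly at the top of the scriptblock. A quick sketch (the path here is just an example; use whatever folder suits):

```powershell
$sb = {
    # jobs start in the default location, so move to the folder we want first
    Set-Location C:\MyData\scripts\Test
    Get-Service | Export-Csv test.csv -NoTypeInformation
}

Start-Job -ScriptBlock $sb -Name test
```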

Grains of rice on a chess board

There is a story about the inventor of chess being rewarded by putting 1 grain of rice on the first square of the board, 2 on the second, 4 on the third and so on, doubling on each square. How much rice does that come to?

 

The total number of grains is 1.84467440737096E+19

 

At 25mg per grain that’s 461,168,601,842,739 kilogrammes of rice

 

Which in more understandable terms is:

 

461,168,601,842.739 metric tonnes

 

or


453,886,749,619.642 tons

 

That’s a lot of rice

 

If you want to play around with the numbers the calculations are:

 

[double]$result = 0

1..64 |
foreach {
    $square = [math]::Pow(2, ($psitem - 1))
    $result += $square
}

# weight of one grain in grams
$wt = 0.025

# total weight in kilogrammes
$totalweight = ($wt * $result) / 1000
$totalweight

# metric tonnes
$mtwt = $totalweight / 1000
$mtwt

# imperial (long) tons: 1 kg is 0.00098421 tons
$tons = $totalweight * 0.00098421
$tons
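As a check on the loop, the sum 1 + 2 + 4 + … + 2^63 is a geometric series that collapses to 2^64 - 1, so the grain count can also be computed directly:

```powershell
# the sum of 2^0 .. 2^63 is 2^64 - 1
[math]::Pow(2, 64) - 1        # 1.84467440737096E+19 as a double

# or exactly, since 2^64 - 1 is the largest unsigned 64-bit integer
[uint64]::MaxValue            # 18446744073709551615
```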

Event Log Providers

An event log provider writes to an event log. I’ve used WMI in the past to get these, but while looking for something else I discovered that Get-WinEvent can also find this information.

 

Get-WinEvent -ListProvider * | ft Name, LogLinks -AutoSize -Wrap

 

This provides a nice long list of all of the providers and the event logs they write to.

 

Usually I’m only interested in what’s writing to a particular event log. And that’s where things get a bit more messy.

 

The log links are supplied as a System.Collections.Generic.IList[System.Diagnostics.Eventing.Reader.EventLogLink] object that doesn’t play nicely with -in or -contains.

 

So we need a bit of PowerShell manipulation to get what we want:

 

$log = 'System'

Get-WinEvent -ListProvider * |
foreach {
    if ($log -in ($psitem | select -ExpandProperty LogLinks | select -ExpandProperty LogName)){
        New-Object -TypeName psobject -Property @{
            Name = $psitem.Name
            Log = $log
        }
    }
}

 

The trick here is that the log links are a collection of objects, so you need to expand them twice to get to the name. Not pretty, but it works.
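If you only need the provider names for one log, there looks to be a shorter route: the log configuration object returned by Get-WinEvent -ListLog carries a ProviderNames property (worth verifying on your own system):

```powershell
# providers registered to write to the System log
(Get-WinEvent -ListLog System).ProviderNames
```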

Count property

It’s frequently said that PowerShell is so big that no one can know everything about it. I proved that today when I “discovered” a change in PowerShell of which I wasn’t aware.

 

If you create an array:

£> $a = 1,2,3

You can then get the number of members of that array, i.e. its length:

 

£> $a.count
3

 

£> $a[0]
1

 

In PowerShell 1.0 and 2.0, if you tried that on a variable that only held a single value, you would get an error when you tried to access the first value:

£> $b = 1


£> $b.count

The count property returns nothing

 

£> $b[0]
Unable to index into an object of type System.Int32.
At line:1 char:4
+ $b[ <<<< 0]
    + CategoryInfo          : InvalidOperation: (0:Int32) [], RuntimeException
    + FullyQualifiedErrorId : CannotIndex

 

This changed in PowerShell 3.0 and later:

£> $b = 1
£> $b.count
1


£> $b[0]
1

 

You can even try other indices
£> $b[1]
£>

 

And just get nothing back rather than an error.

 

This is really useful, as you can now safely test the Count property: if the value is greater than 1 it’s definitely a collection. Alternatively, always treat it as a collection and iterate over the elements. I can see this simplifying things for me in quite a few situations.
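If you’re still on PowerShell 2.0, or just want a guarantee that you’re dealing with a collection, the @() array subexpression operator wraps any result, even a scalar or nothing at all, in an array:

```powershell
$b = 1
$c = @($b)        # force an array, whatever $b holds
$c.Count          # 1
$c[0]             # 1

# an empty result becomes an empty array rather than $null
$d = @(Get-Process -Name NoSuchProcess -ErrorAction SilentlyContinue)
$d.Count          # 0
```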

foreach

I was asked about foreach today and responded with a description of how ForEach-Object works. Thinking about it, I should have realised that part of the issue with foreach is the confusion that arises between foreach and foreach, that is, the difference between the foreach PowerShell statement and the foreach alias of the ForEach-Object cmdlet.

 

To unravel the confusion: there are two different things referred to as foreach. They do very similar things but are used in different ways.

 

The first is the PowerShell statement which is used to step through each value in a collection of values:

 

$procs = Get-Process

foreach ($proc in $procs) {

New-Object -TypeName PSObject -Property @{
   Name = $proc.Name
   SysMen =  $proc.NonpagedSystemMemorySize + $proc.PagedSystemMemorySize64
}

}

 

You create your collection of objects and then use foreach to step through them. It is convention to make the collection name plural and the individual member of the collection the singular. Within the script block you define what happens to each object.

 

I know I could have performed this action in a simpler way, but I wanted to demonstrate how foreach works. The simpler way would be:

Get-Process |
select Name,
@{Name = 'SysMen';
Expression = {$_.NonpagedSystemMemorySize + $_.PagedSystemMemorySize64}}

 

Now we’ve got that out of the way, what about the other foreach, which is the alias of ForEach-Object? This can be used to iterate over a collection of objects. The main difference is that the objects are usually piped into foreach:

 

Get-Process |
foreach {

New-Object -TypeName PSObject -Property @{
   Name = $_.Name
   SysMen =  $_.NonpagedSystemMemorySize + $_.PagedSystemMemorySize64
}

}

 

If you don’t like using $_ to represent the object on the pipeline, try:

Get-Process |
foreach {

New-Object -TypeName PSObject -Property @{
   Name = $psitem.Name
   SysMen =  $psitem.NonpagedSystemMemorySize + $psitem.PagedSystemMemorySize64
}

}

 

which is exactly equivalent to

Get-Process |
ForEach-Object {

New-Object -TypeName PSObject -Property @{
   Name = $psitem.Name
   SysMen =  $psitem.NonpagedSystemMemorySize + $psitem.PagedSystemMemorySize64
}

}

 

Using the cmdlet or its alias you can set up script blocks that run once before the first object is processed (BEGIN), once per object on the pipeline (PROCESS) and once after the last object has been processed (END):

Get-Process |
ForEach-Object `
-BEGIN {
  Write-Host "First object about to be processed"
} `
-PROCESS {
New-Object -TypeName PSObject -Property @{
   Name = $psitem.Name
   SysMen =  $psitem.NonpagedSystemMemorySize + $psitem.PagedSystemMemorySize64
}
} `
-END {
Write-Host "Last object processed"
}

 

Your output looks like this:

 

First object about to be processed

Name                                                                      SysMen
----                                                                      ------
armsvc                                                                    164096
concentr                                                                  200400
conhost                                                                   119104
csrss                                                                     153664
csrss                                                                     407760
           <truncated>

WUDFHost                                                                  103696
WWAHost                                                                   778816
WWAHost                                                                   785120
Yammer.Notifier                                                           566304
Last object processed

 

More info is available in the help files for ForEach-Object and about_Foreach.

Can it -whatif

One of the nice things about PowerShell is that it can help you prevent mistakes. Many of the cmdlets that make changes to your system have a -WhatIf parameter that allows you to test your actions:

 

£> Get-Process | Stop-Process -WhatIf
What if: Performing the operation "Stop-Process" on target "armsvc (1564)".
What if: Performing the operation "Stop-Process" on target "audiodg (3004)".
What if: Performing the operation "Stop-Process" on target "concentr (7080)".
What if: Performing the operation "Stop-Process" on target "conhost (3628)".

 

etc

 

 

The -WhatIf parameter is only present on cmdlets that make changes, and then only if the team writing the cmdlet implemented it; they should, but you can’t guarantee it happened. So how can you find out which cmdlets implement -WhatIf?

 

Use Get-Command

 

Compare these two commands.

 

£> Get-Command -Module CimCmdlets | select Name

Name
----
Export-BinaryMiLog
Get-CimAssociatedInstance
Get-CimClass
Get-CimInstance
Get-CimSession
Import-BinaryMiLog
Invoke-CimMethod
New-CimInstance
New-CimSession
New-CimSessionOption
Register-CimIndicationEvent
Remove-CimInstance
Remove-CimSession
Set-CimInstance

 

The first command shows the cmdlets in a module.

 

£> Get-Command -Module CimCmdlets -ParameterName Whatif | select Name

Name
----
Invoke-CimMethod
New-CimInstance
Remove-CimInstance
Remove-CimSession
Set-CimInstance

 

Now you can test a module to see which cmdlets have -WhatIf enabled. You can also test at just the cmdlet level:

£> Get-Command *process -ParameterName Whatif  -CommandType cmdlet | select Name

Name
----
Debug-Process
Stop-Process


£> Get-Command *wmi* -ParameterName Whatif  -CommandType cmdlet | select Name

Name
----
Invoke-WmiMethod
Remove-WmiObject
Set-WmiInstance
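For a single command you can also interrogate its Parameters dictionary directly; a small sketch:

```powershell
# does Stop-Process support -WhatIf?
(Get-Command Stop-Process).Parameters.ContainsKey('WhatIf')   # True

# Get-Process makes no changes, so it doesn't
(Get-Command Get-Process).Parameters.ContainsKey('WhatIf')    # False
```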

Select-Object or Where-Object

Select-Object and Where-Object (referred to by their aliases select and where from now on) are both used to filter data.

 

It is important to know the way these two cmdlets are used.

 

Where is used to restrict the objects on the pipeline to those where one or more properties satisfy the filter criteria, e.g.:

Get-Process | where CPU -gt 20

 

You get a reminder of this if you use the full syntax:

Get-Process | where -FilterScript {$_.CPU -gt 20}

 

As a matter of style, you very rarely see anyone using the parameter name -FilterScript.

 

Select is used to cut the number of properties on an object to just those you want to work with, e.g.:

Get-Process | select Name, Id, CPU

 

If you want just those properties for the processes where CPU time is greater than 20 seconds you need to combine them on the pipeline:

Get-Process | where CPU -gt 20 | select Name, Id, CPU

 

There isn’t a way to embed a where-type filter in a select or vice versa. Keep it simple. Use the pipeline and let the cmdlets do the job for which they were designed.
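One related point: the simplified where syntax shown above only takes a single comparison. As soon as you need more than one criterion you have to drop back to the scriptblock form, for example (the 50MB threshold is just an illustration):

```powershell
# multiple criteria need the scriptblock form of where
Get-Process |
where {$_.CPU -gt 20 -and $_.WorkingSet64 -gt 50MB} |
select Name, Id, CPU
```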

Invoke-Item tips

Invoke-Item is another cmdlet that you don’t see used that often, but there is one place where it’s invaluable: opening files. If you give Invoke-Item the path to a file

Invoke-Item -Path .\procs.txt

 

The file will be opened with the default application associated with that extension. In this case Notepad.

 

If you use a PowerShell script file

Invoke-Item .\t1.ps1

 

It will be opened in Notepad for you to examine.  CSV files automatically open in Excel and other Office files are opened in the correct application.

 

Using the alias ii for Invoke-Item makes it even easier.
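A few quick sketches of the alias in action (the file names are just examples):

```powershell
ii .\procs.txt    # opens in Notepad, or whatever handles .txt
ii .\test.csv     # opens in Excel, if installed
ii .              # a folder path opens File Explorer at that location
```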

and finally

If you’re old enough and have seen UK TV you’ll recognise the title, but this post is about using try-catch blocks.

Using try-catch, this is a fairly normal construction:

try {
Get-CimInstance -ClassName Win32_LogicalDisk  -ErrorAction Stop
}
catch {
Throw "something went wrong"
}

The number of commands within the try block should be minimised so that you stand a chance of knowing what you are going to catch.

If you introduce an error, for instance by using the wrong class name:

£> try {
Get-CimInstance -ClassName Win32_LogicalDrive  -ErrorAction Stop
}
catch {
Throw "something went wrong"
}
something went wrong
At line:5 char:2
+  Throw "something went wrong"
+  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OperationStopped: (something went wrong:String) [], RuntimeException
    + FullyQualifiedErrorId : something went wrong

 

You can control what happens and even recover.

I don’t see the third element of a try-catch block being used much: the finally block. The finally block will run irrespective of whether the try block works or not. It’s your clean-up block.

Consider

£> try {
$cs = New-CimSession -ComputerName $env:COMPUTERNAME
Get-CimInstance -ClassName Win32_LogicalDrive  -CimSession $cs -ErrorAction Stop
}
catch {
Throw "something went wrong"
}
something went wrong
At line:6 char:2
+  Throw "something went wrong"
+  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OperationStopped: (something went wrong:String) [], RuntimeException
    + FullyQualifiedErrorId : something went wrong
 

£> Get-CimSession


Id           : 1
Name         : CimSession1
InstanceId   : 68a4f534-d222-4e52-b6f9-ba65b1d57b3b
ComputerName : RSSURFACEPRO2
Protocol     : WSMAN

 

This try block fails but it leaves a CIM session hanging about. Not good practice. I know it will get cleaned up when you close the PowerShell session, but that could be hours away. Better to clean up now.

£> try {
$cs = New-CimSession -ComputerName $env:COMPUTERNAME
Get-CimInstance -ClassName Win32_LogicalDrive  -CimSession $cs -ErrorAction Stop
}
catch {
Throw "something went wrong"
}
finally {
Remove-CimSession -CimSession $cs
}

Get-CimSession
something went wrong
At line:6 char:2
+  Throw "something went wrong"
+  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OperationStopped: (something went wrong:String) [], RuntimeException
    + FullyQualifiedErrorId : something went wrong
 

No sign of a CIM session, even if you check specifically:

 

£> Get-CimSession

£>

If you correct the code:

try {
$cs = New-CimSession -ComputerName $env:COMPUTERNAME
Get-CimInstance -ClassName Win32_LogicalDisk  -CimSession $cs -ErrorAction Stop
}
catch {
Throw "something went wrong"
}
finally {
Remove-CimSession -CimSession $cs
}

Get-CimSession

 

You’ll find the CIM session is still removed. A finally block will always run.

When you are using try blocks think about your environment and use a finally block to clean up.

Bad practices – making scripts needlessly interactive

The PowerShell community spends a lot of time talking about best practices when using PowerShell. I’m not convinced this approach is working, as we keep seeing the same bad practices coming through on forum questions. I thought I’d turn the problem on its head and present a now-and-again series of bad practices. These are things I’ve seen people doing that either make their scripts far more complicated than they need to be or completely negate the power that PowerShell brings to your daily admin tasks.

I think that a lot of this is due to people not taking the time to learn the best way to use PowerShell. The thinking goes something like this: I can script with the XXX language. PowerShell is a scripting language. Therefore I know how to use PowerShell.

NO. WRONG.

PowerShell should be thought of more as an automation engine than a scripting language. The reason for using PowerShell is to perform tasks that are one or more of: repetitive, boring, long, complicated, error-prone, or impossible in the GUI. The goal of automating a task should be that you can run it automatically, without human intervention, if you need to.

I’ll give an example.

Some years ago I was involved in migrating a new customer into the managed service environment of the company I was working for. Step one was to move them out of their previous supplier’s data centre. As part of that move I wrote a PowerShell script that shut down every machine in that environment. The script took a list of computers from a file and shut them down in order. My final action was to shut down the machine I’d used to run the script.

I actually ran the script manually but I could have run it as a scheduled job – which I did another time.

What do I mean by making scripts needlessly interactive?

Consider this example:

 

$hs = @"
Press 1 to reboot computera
Press 2 to reboot computerb
Press 3 to reboot computera and computerb

"@

$x = Read-Host -Prompt $hs

switch ($x) {
1  {Restart-Computer -ComputerName $computera; break}
2  {Restart-Computer -ComputerName $computerb; break}
3  {
     Restart-Computer -ComputerName $computera
     Restart-Computer -ComputerName $computerb
      break
   }
}

 

The script defines the prompt (a menu in effect), gets the user’s choice and performs the required reboots. It works, BUT it’s so wasteful in terms of the effort required. Even worse, it’s not sustainable. When you add another computer your choices become a, b, c, a+b, a+c, b+c, a+b+c. Now think what happens as your list of possible computers grows to 5 or 10 or 100!

Every time you find yourself typing Read-Host, stop and ask yourself if it’s really needed. I’m not suggesting that cute little animals will die if you use it, but you will cause yourself unnecessary work, now and in the future.

So what should you do?

Use a parameterised script.

 

[CmdletBinding()]
param (
[string[]]$computername
)

foreach ($computer in $computername){
  Restart-Computer -ComputerName $computer
}

 

Call the script reboot.ps1 for the sake of argument.

You can then use it like this:

./reboot -computername computerA
./reboot -computername computerb

./reboot -computername computerA, computerb, computerc

 

If you have a lot of machines to reboot:

./reboot -computername (Get-Content computers.txt)

Put the names in a file (one per line) and call the script as above. You could even have different files for different groups of computers if required.

Less typing and easier to use and maintain.
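As an extension of the idea (this is my addition, not part of the original script), making the parameter accept pipeline input means the computer names can be piped straight in from Get-Content or anything else that emits strings:

```powershell
[CmdletBinding()]
param (
  [Parameter(ValueFromPipeline=$true)]
  [string[]]$computername
)

process {
  foreach ($computer in $computername){
    Restart-Computer -ComputerName $computer
  }
}
```

Saved as reboot.ps1, it can then be called as Get-Content computers.txt | ./reboot as well as with the -computername parameter.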

Now you’re using the power that PowerShell brings you.