Monthly Archives: March 2010

Scripting Games 2010

Don’t expect to be anywhere but in front of your computer between 26 April and 7 May. Why? Because the 2010 Scripting Games are happening.

The usual 10 events, with beginner and advanced versions.

A whole bunch of expert commentators – including me.

 

For more information and a few hints on things that could be useful see http://blogs.technet.com/heyscriptingguy/archive/2010/03/21/hey-scripting-guy-march-21-2010.aspx

Book Review: Windows PowerShell 2.0 Best Practices

Author: Ed Wilson and the Windows PowerShell Teams at Microsoft

Publisher: Microsoft Press

ISBN: 978-0-7356-2646-1

I have three main criteria for judging a book:

  • Is it technically accurate?
  • Does it deliver the material it claims to deliver?
  • Is it worth the cost of purchase and the time I spend reading it?

Before diving into the review I have to point out a vested interest in the book, in that I was one of the people asked to contribute a sidebar. There are a number of contributions from Microsoft PowerShell Team members, Microsoft consultants and PowerShell MVPs. Don’t skip the sidebars when reading the book as they contain a lot of interesting information.

The book arrives with a large thud on the doormat, as it weighs in at a mighty 715 pages by the time the index is counted. These pages contain 17 chapters in five parts:

  1. Introduction – covers the Scripting Environment; PowerShell capabilities; AD capabilities and User Management
  2. Planning – covers Scripting Opportunities; Configuring Scripting Environment; Avoiding Pitfalls; Tracking Scripting Opportunities
  3. Designing – covers Functions; Help; Modules; Input and Output; Errors
  4. Testing and Deploying – covers Testing Scripts; Running Scripts
  5. Optimizing – covers Logging Results; Troubleshooting Scripts

This is a big book and a reader could legitimately expect it to be packed with best practices. There are a lot of places in the book where the words “best practice” are used, but it is not easy to find a specific recommendation if, for instance, you want to find the recommended way to deliver help for a script or function. The accompanying CD contains a six-page PDF file that gathers the best practices together on a chapter-by-chapter basis. This information should have been in the book – at worst as an appendix, or better still as a summary section at the end of each chapter.

It may be argued that there wasn’t space in the book, but my reading shows plenty of areas that could be cut to make room, such as:

  • Table 2-1 – three pages listing the members of the Win32_Process object
  • Table 2-2 – over a page listing members of the EventLogConfiguration class
  • Table 2-3 – over a page listing the members of the Win32_PingStatus class
  • Pages 78-80 – multiple views of an Excel spreadsheet and message dialogs
  • Table 4-1 – over a page of AD schema object members
  • Table 4-2 – over a page of ActiveDirectorySchema class members
  • Page 111 – a page listing the
  • Etc., etc.

There is a lot of repetition in the output shown for various commands that could have been edited down.

By all means supply the best practices as a printable quick reference, but they should also have been available in the book.

After reading the book my immediate thought was that I couldn’t decide who the book was aimed at or which version of PowerShell was being discussed. The title includes the phrase “PowerShell 2.0” but the book includes a lot of material that is aimed at PowerShell 1.0. For instance, why spend time on Win32_PingStatus when PowerShell 2.0 has the Test-Connection cmdlet?

A total of 243 scripts are supplied on the CD. No, I didn’t count them; I used

(Get-ChildItem -Filter *.ps1 -Recurse).count

The scripts in many cases are written to demonstrate a point rather than being useful for administrators. I wonder if the same points could have been made with scripts that administrators could also put to practical use.

The lists of WMI providers for Windows Server 2003 and Windows XP are useful, but a better aid would have been to extend this to cover Windows Vista, Windows 7, Windows Server 2008 and Windows Server 2008 R2.

So, having said all of that, how does the book stand on delivering best practice for using PowerShell 2.0? That’s a difficult one to answer because best practice can be very subjective. I don’t use aliases in scripts and tend to avoid them at the prompt. I especially abhor the use of % and ? as aliases for Foreach-Object and Where-Object. Other people think the only way to go is to alias everything. In my opinion the book doesn’t deliver PowerShell 2.0 best practice. I’ll give some examples to show why I think that.

In chapter 2 the Get-Service cmdlet is discussed and Win32_Service is used to change the start mode of a service. This was an accepted, and good, technique in PowerShell 1.0, but in 2.0 I would use Set-Service. It has a -ComputerName parameter so we can work with remote machines – no need to drop into WMI. Less code and less complicated = best practice.
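
As a minimal sketch (the service and computer names are only examples), changing a start mode becomes a one-liner:

Set-Service -Name BITS -ComputerName server01 -StartupType Automatic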

On page 59 we read: “With the presence of Ping.exe and the Win32_PingStatus WMI class, you really do not need the Test-Connection cmdlet.” I find that statement unbelievable. Ping.exe returns text which we would then have to parse – bad practice. Test-Connection uses Win32_PingStatus but wraps it up and makes it easier to use.
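
For example (the computer name is assumed), Test-Connection can even return a simple Boolean with no text parsing at all:

PS> Test-Connection -ComputerName server01 -Count 2 -Quiet
True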

The list of ADSI connection strings on page 68 is very useful information.

On page 77 the function has a [switch] parameter for debugging – not best practice; we’ll come back to that later.

In chapter 3 a lot of time is spent discussing getting information out of an Excel spreadsheet to use as input to user creation scripts. Why? Dump it to a CSV file – it’s easier, it’s less coding and PowerShell works well with CSV files on the pipeline.
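
A sketch of what I mean, assuming a users.csv file with Name and Department columns – each row arrives on the pipeline as an object:

Import-Csv -Path users.csv | ForEach-Object {
    # replace Write-Host with the real user-creation code
    Write-Host "Would create user $($_.Name) in $($_.Department)"
}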

Chapter 4 discusses querying Active Directory using an ADO database. It’s claimed to be the traditional method for querying AD – maybe in VBScript, because we didn’t have anything else. In PowerShell 1.0 I would use a DirectorySearcher object – faster, more flexible and easier to work with. In PowerShell 2.0 this is even easier with the [ADSISEARCHER] type accelerator.
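
A quick sketch of the type accelerator in action – finding all user accounts in the current domain:

([ADSISEARCHER]"(&(objectCategory=person)(objectClass=user))").FindAll()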

Chapter 5 looks at scripting opportunities. One script checks the Operating System version. Why not go with

Get-WmiObject -Class Win32_OperatingSystem | select caption

It gives the version as a name so we don’t have to decipher the version numbers.

Chapter 6 spends a lot of time discussing ways to pass parameters into a function or script. Use the param statement! It gives maximum flexibility and power, and provides consistency. Best practice is to be consistent, not to change approach because you only have one or two parameters. Also in chapter 6, why discuss function libraries when we have modules in PowerShell 2.0? More power + more flexibility = best practice.
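
The pattern I mean looks like this (a sketch; the names are illustrative):

function Get-Report {
    param (
        [string]$ComputerName = "localhost",
        [int]$Days = 7
    )
    "Reporting on $ComputerName for the last $Days days"
}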

Chapter 7 discusses using the System.Random class. Why, when we have a Get-Random cmdlet in PowerShell 2.0? Using a cmdlet ahead of scripting should always be best practice.
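
For instance, a random number in a given range is a single cmdlet call:

Get-Random -Minimum 1 -Maximum 100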

Chapter 10 on help really misses the point. Why use here-strings when we can use comment-based help, which isn’t actually mentioned in this chapter! When it surfaces in chapter 11 it’s referred to as “Help Function tags”. I’ve never heard that phrase before; we find out more about it by using

Get-help about_Comment_Based_Help
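
As a sketch, a minimal set of tags looks like this; Get-Help then displays the result exactly as it would for a cmdlet:

function Get-Example {
<#
.SYNOPSIS
A one-line description of the function.
.EXAMPLE
Get-Example
#>
    "hello"
}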

More on $args in chapter 12. Use the param statement; it is a better practice. Why discuss creating code to check that the value of a parameter falls within a given range when we have a [ValidateRange()] attribute we can apply to the parameter? As a generalisation, the advanced function capabilities are not covered very well.
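
A sketch of the attribute in use (the function is illustrative) – PowerShell does the range check for us and throws if the value is out of bounds:

function Set-Volume {
    param (
        [ValidateRange(0,100)]
        [int]$Percent
    )
    "Volume set to $Percent percent"
}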

Would you really store a password in the Registry?

There is a coding error in the script on page 457. Compare with page 458.

When testing the syntax of scripts, both PowerShell ISE and the PowerGUI editor, among others, will supply clues as to where the errors can be found. If you want a good syntax check try using Test-Script from PowerShell Community Extensions 2.0. Also, why write your own code to invoke debug or whatif when you can get them in advanced functions by using [CmdletBinding(SupportsShouldProcess=$true)]?
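
A sketch of what that gives us for free (the function name is illustrative):

function Remove-TempFile {
    [CmdletBinding(SupportsShouldProcess=$true)]
    param ([string]$Path)
    if ($PSCmdlet.ShouldProcess($Path, "Remove file")) {
        Remove-Item -Path $Path
    }
}

Calling Remove-TempFile with -WhatIf then produces the standard “What if:” message without us writing a line of whatif handling ourselves.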

Which brings up another point – why use filters when you have advanced functions?

Some general comments:

  • Some of the figures – especially those showing PowerShell error messages – are difficult to read.
  • The appendices could, for the most part, be dropped.
  • Appendix A could be generated with get-help *-* | ft name, synopsis -Wrap
  • Appendix B is delivered by Get-Verb
  • Appendices D & E are 40 pages of listings of WMI and .NET classes. Put them on the CD and put the best practice summary in the book.
  • The appendix with WMI error messages is useful.

If the book is delivering best practice, why discuss all the options? Supply the best practice.

Overall, this book tries to cover all the options for performing a task in PowerShell and doesn’t concentrate on PowerShell 2.0 as you would expect from the title. It feels like a book that was originally targeted at PowerShell 1.0 and had some of the PowerShell 2.0 bits shoehorned in – especially in the early chapters.

Judging against my criteria:

  • Is it technically accurate? The scripts work but I don’t think they always conform to best practice. Some of the methods described in the book duplicate functionality that is already available in PowerShell, which drags the mark down to 6/10.
  • Does it deliver the material it claims to deliver? I think the book, at least partially, fails on this point. I would want to be able to pick up a best practice book and find, easily and quickly, the recommended best practice for performing a PowerShell task. I can’t do this with the book in its current format. I can’t give more than half marks on this one: 5/10.
  • Is it worth the cost of purchase and the time I spend reading it? I learnt a few things from the book and some of the tables supply information that is difficult to track down. Some of the sidebars are especially useful. I’d struggle to justify more than 7/10.

Overall I would say that it’s a good idea for a book that doesn’t actually achieve what it tries to do. I would like to see a second edition but involving the PowerShell community to a greater extent so that a true consensus on PowerShell best practice could be made available.


Module Structure

Since PowerShell 2.0 appeared I have been steadily converting a lot of scripts to modules. Up until now I have been using a single .psm1 file (and possibly a .psd1 file for the manifest). The .psm1 file contains all of the functions. Some of my .psm1 files are getting a bit unwieldy, so I decided to split them up, using the modules in the Windows 7 PowerShell Pack as an example.

In the Pack each module folder has a set of .ps1 files – one function per file. The .psm1 file then dot-sources the .ps1 files. That was close to what I wanted, but I wanted to cut the number of files.

What I ended up with was something like this, using my FileFunctions module as an example:

FileFunctions.ps1
FileFunctions.psd1
FileFunctions.psm1
FolderFunctions.ps1
RecycleBinFunctions.ps1
ShareFunctions.ps1
TempFunctions.ps1
Trinet.Core.IO.Ntfs.dll
XmlFunctions.ps1

The .psd1 file loads the DLL (used for working with Alternate Data Streams) and calls the .psm1 file. There are a number of helper functions in this module that I don’t want to expose; I originally controlled this in the .psd1 file.
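
A minimal sketch of the manifest entries that matter here (my actual file has more in it, but this is the shape):

@{
    ModuleVersion      = '1.0'
    # load the Alternate Data Streams helper assembly first
    RequiredAssemblies = @('Trinet.Core.IO.Ntfs.dll')
    # then hand control to the script module
    ModuleToProcess    = 'FileFunctions.psm1'
}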

The .psm1 file looks like this

. $psScriptRoot\FileFunctions.ps1
. $psScriptRoot\FolderFunctions.ps1
. $psScriptRoot\RecycleBinFunctions.ps1
. $psScriptRoot\ShareFunctions.ps1
. $psScriptRoot\TempFunctions.ps1
. $psScriptRoot\XmlFunctions.ps1

 

where $psScriptRoot contains the directory from which the module is executing – useful if you move the location of your modules.

I use Export-ModuleMember in each .ps1 file to control which functions are exposed.
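
As a sketch (the function names are illustrative), the end of each .ps1 file carries a line like:

Export-ModuleMember -Function Get-FolderSize, New-TempFolder

Any helper function not named there stays private to the module.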

So far it seems to work and does the job. I’m trying it on a few modules before adopting this as a standard, but it seems to give me the best mix of flexibility, control and manageable code.


Folder names without spaces

Problem

How can I prevent the creation of folders with spaces in the name from within my script?

Solution

function New-Folder {
param (
    [string]$path = "C:\Test",
    [string]$name = ""
)
    # one space is enough to reject the name
    if ($name.IndexOf(" ") -ge 0){throw "Folder name contains spaces"}
    # the parent folder must already exist
    if (!(Test-Path -Path $path)){throw "Invalid path"}
    # an empty name slips past the space check, so test for it explicitly
    if ($name -eq ""){throw "Invalid folder name"}
    New-Item -Path $path -Name $name -ItemType directory
}

Pass the path and the new folder name to the function as parameters. Use the IndexOf method to check for spaces in the name and throw an exception if one is found. A general check is all we need, as a single space is enough to stop the creation. While we are at it we check that the path exists and that the name is not empty, because

"".IndexOf(" ") returns -1, i.e. it doesn’t find any.

We can then use New-Item to create the folder as normal.
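
Calling the function looks like this (assuming C:\Test exists):

New-Folder -Path C:\Test -Name Reports
New-Folder -Path C:\Test -Name "My Reports"   # throws "Folder name contains spaces"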


Variables I

In my scripts I usually create variables as

$a = 3

or similar.

There are a number of cmdlets for working with variables

Clear-Variable
Get-Variable
New-Variable
Remove-Variable
Set-Variable

Let’s start with Get-Variable.

PS> Get-Variable

Name                           Value
----                           -----
$                              *variable
?                              True
^                              Get-Command
_
args                           {}
ConfirmPreference              High
ConsoleFileName
DebugPreference                SilentlyContinue
Error                          {}
ErrorActionPreference          Continue
ErrorView                      NormalView
ExecutionContext               System.Management.Automation.EngineIntrinsics
false                          False
FormatEnumerationLimit         4
HOME                           C:\Users\Richard
Host                           System.Management.Automation.Internal.Host.InternalHost
input                          System.Collections.ArrayList+ArrayListEnumeratorSimple
MaximumAliasCount              4096
MaximumDriveCount              4096
MaximumErrorCount              256
MaximumFunctionCount           4096
MaximumHistoryCount            64
MaximumVariableCount           4096
MyInvocation                   System.Management.Automation.InvocationInfo
NestedPromptLevel              0
null
OutputEncoding                 System.Text.ASCIIEncoding
PID                            5064
PROFILE                        C:\Users\Richard\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
ProgressPreference             Continue
PSBoundParameters              {}
PSCulture                      en-GB
PSEmailServer
PSHOME                         C:\Windows\System32\WindowsPowerShell\v1.0
PSSessionApplicationName       wsman
PSSessionConfigurationName    
http://schemas.microsoft.com/powershell/Microsoft.PowerShell
PSSessionOption                System.Management.Automation.Remoting.PSSessionOption
PSUICulture                    en-US
PSVersionTable                 {CLRVersion, BuildVersion, PSVersion, WSManStackVersion...}
PWD                            C:\scripts
ReportErrorShowExceptionClass  0
ReportErrorShowInnerException  0
ReportErrorShowSource          1
ReportErrorShowStackTrace      0
ShellId                        Microsoft.PowerShell
StackTrace
true                           True
VerbosePreference              SilentlyContinue
WarningPreference              Continue
WhatIfPreference               False

 

We can see a similar listing with

PS> Get-ChildItem -Path variable:

To view an individual variable we have

PS> Get-Variable -Name PWD

Name                           Value
----                           -----
PWD                            C:\scripts

PS> $variable:PWD

Path
----
C:\scripts

 

Notice the difference in the way the information is returned.  If we only want the value of the variable we need

PS> Get-Variable -Name PWD -ValueOnly

Path
----
C:\scripts

PS> $variable:PWD.Path
C:\scripts

 

If we dig into this a bit we see that

PS> Get-Variable -Name PWD | gm

   TypeName: System.Management.Automation.PSVariable

 

PS> Get-Variable -Name PWD -ValueOnly | gm

   TypeName: System.Management.Automation.PathInfo

 

PS> $variable:PWD | gm

   TypeName: System.Management.Automation.PathInfo

 

There is an extra wrapper – the PSVariable object – that we need to be aware of when dealing with Get-Variable.

One other benefit of using Get-Variable is that we can use the -Scope parameter to examine the variables in specific scopes.
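
A quick sketch, run at the console – from inside a function we can look at the global copy of a variable rather than the local one that shadows it:

$answer = 42
function Test-Scope {
    $answer = 0    # local copy shadows the global value
    Get-Variable -Name answer -Scope Global -ValueOnly
}
Test-Scope    # returns 42, not 0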


Rename files problem

I came across this interesting problem and thought it was worth a post.

Problem

You have a folder with a set of files with names like this

Black or Blue [15665782345].txt
Black or Green [1068682345].txt
Black or White [12345].txt
Black or Yellow [A2G345].txt
Blue or Green [14786862345].txt
Pink or Yellow [12345465785].txt
Purple or Gold [345612345].txt
Red or Blue [112132345].txt
Yellow or White [4335433678512345].txt

You need to rename each file so that the square brackets, and everything between them, are removed.

Rename-Item won’t work here because [ and ] are treated as wildcard characters in the path.

Solution

Get-ChildItem | ForEach-Object {
    # split the base name at the opening bracket
    $name = $_.BaseName.Split('[')
    # keep the part before the bracket, trim trailing spaces, re-attach the extension
    $newname = $name[0].TrimEnd() + $_.Extension
    # -LiteralPath stops the brackets in the old name being treated as wildcards
    Move-Item -LiteralPath $_.FullName -Destination $newname -Force
}

 

Read the file list with Get-ChildItem. Create the new name by splitting the BaseName at the “[“ character, trimming any trailing blanks from the first part and adding the extension back on.

Use Move-Item to move the file, under its new name, to the same folder. The file is renamed.

I did try using the -split operator instead of the Split() method but didn’t have any success – -split treats its pattern as a regular expression, so the [ has to be escaped.
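
Something like this should do it (a sketch I haven’t run against the original problem):

$newname = ($_.BaseName -split '\[')[0].TrimEnd() + $_.Extension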


Paths: IV

The last of our path cmdlets is Convert-Path.  This converts a PowerShell path to a PowerShell provider path.

This sequence should explain how it works.

 

PS> cd HKLM:
PS> cd software
PS> cd microsoft
PS> Get-Location

Path
----
HKLM:\software\microsoft

PS> Convert-Path -Path (Get-Location)
HKEY_LOCAL_MACHINE\software\microsoft

 

Change drive to the HKLM: registry drive and navigate into the software\microsoft key.

Get-Location returns the PowerShell drive path, i.e. file-system style.

Convert-Path returns the provider path – the path as the registry itself understands it.


Paths: III

Resolve-Path is interesting in that

Resolve-Path c:\scripts\*

"c:\scripts\*" | Resolve-Path

both supply a listing of the contents of the folder. A sample is shown below.

C:\scripts\WMICookBook
C:\scripts\WPF
C:\scripts\XML
C:\scripts\auto.csv
C:\scripts\computers.txt
C:\scripts\emptyfolders.txt
C:\scripts\encrypted.txt
C:\scripts\firewall.ps1
C:\scripts\import-mymodule.ps1

 

It shows files and subfolders with no way to distinguish between them, and there isn’t a way to recurse through subfolders.
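
One workaround is to pipe the results back through Test-Path – a sketch:

Resolve-Path -Path C:\scripts\* |
    Where-Object { Test-Path -Path $_ -PathType Container }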

A System.Management.Automation.PathInfo object is returned with these properties:

Drive
Path
Provider
ProviderPath

The default display is just the path.

I’m not 100% sure what I’ll be using this cmdlet for, but compare the output of

PS> ls hklm:\software

    Hive: HKEY_LOCAL_MACHINE\software

SKC  VC Name                           Property
---  -- ----                           --------
  5   0 Adobe                          {}
  2   0 Atheros                        {}
  1   0 ATI Technologies               {}
  1   0 AVG                            {}

 

and

 

PS> Resolve-Path hklm:\software\*

Path
----
HKLM:\software\Adobe
HKLM:\software\Atheros
HKLM:\software\ATI Technologies
HKLM:\software\AVG

and I’m getting a few ideas of where it might be useful.


Paths: II

Two related cmdlets deal with paths – Split-Path and Join-Path.

Starting with Split-Path, we can use it to split a path to retrieve the file or the directory:

PS> Split-Path -Path C:\Scripts\DC02\Scripts\DNS\new-mx.ps1
C:\Scripts\DC02\Scripts\DNS


PS> Split-Path -Path C:\Scripts\DC02\Scripts\DNS\new-mx.ps1 -Parent
C:\Scripts\DC02\Scripts\DNS


PS> Split-Path -Path C:\Scripts\DC02\Scripts\DNS\new-mx.ps1 -Leaf
new-mx.ps1

The default gives the parent container.

We can also separate the drive

PS> Split-Path -Path C:\Scripts\DC02\Scripts\DNS\new-mx.ps1 -Qualifier
C:


PS> Split-Path -Path C:\Scripts\DC02\Scripts\DNS\new-mx.ps1 -NoQualifier
\Scripts\DC02\Scripts\DNS\new-mx.ps1

 

Join-Path does the opposite:

PS> Join-Path -Path C:\Scripts\DC02\Scripts\DNS -ChildPath new-mx.ps1
C:\Scripts\DC02\Scripts\DNS\new-mx.ps1

Useful for when you are building file paths.
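
A sketch of the sort of thing I mean – no worrying about whether the path already ends in a backslash (the file name is illustrative):

$logFile = Join-Path -Path $env:TEMP -ChildPath "backup-log.txt"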


Paths: I

There are a number of cmdlets for working with paths

PS> Get-Command *path | select Name

Name
----
Convert-Path
Join-Path
Resolve-Path
Split-Path
Test-Path

 

Test-Path is the one I probably use most

PS> Test-Path -Path "C:\Users\Richard\Documents\PowerShell in Practice\TODO.txt"
True
PS> Test-Path -Path "C:\Users\Richard\Documents\PowerShell in Practice\Chapter1.docx"
False

It returns a Boolean – True if the path is found and False if not. By the way, don’t worry about the missing chapter; it’s somewhere else.

One of my common uses for this is to test if a file exists before doing something

PS> if (Test-Path -Path "C:\Users\Richard\Documents\PowerShell in Practice\TODO.txt"){Write-Host "Found it"}
Found it


PS> if (Test-Path -Path "C:\Users\Richard\Documents\PowerShell in Practice\chapter1.docx"){Write-Host "Found it"} else {Write-Host "Its gone"}
Its gone

 

We can also test if the path points to a file or a directory

PS> Test-Path -Path "C:\Users\Richard\Documents\PowerShell in Practice\TODO.txt" -PathType "leaf"
True
PS> Test-Path -Path "C:\Users\Richard\Documents\PowerShell in Practice\TODO.txt" -PathType "container"
False

The path cmdlets also work with other providers, as we shall see later.
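
For a quick taste, Test-Path works just as happily against the registry provider:

PS> Test-Path -Path HKLM:\SOFTWARE\Microsoft
True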
