Approval Workflows in SharePoint Server 2010 / Office 365

Long time, no blog – but yes, I am still alive :)  Just a quick post on something I've been fighting with for most of this morning.  We're helping a new client implement Office 365, and specifically I'm adding a document library with workflows to handle a multi-step approval process.  SharePoint Server 2010 includes an out-of-the-box (OOTB) workflow (Approval – SharePoint 2010) for this functionality.  However, when I went to add a workflow to my document library, only two workflows were listed – Disposition Approval and Three-State – and I was missing the three other OOTB workflows (Approval, Collect Feedback, & Collect Signatures).  Reviewing the list of workflows under Site Settings –> Site Administration –> Workflows, I noticed the missing workflows were showing a status of Inactive.  Searching for the issue, I found forum threads where other people were experiencing the same problem – and with on-premises SharePoint Server 2010, not just Office 365.


Most threads offered the same basic advice of making sure the Workflows feature was activated under Site Settings –> Site Collection Features – and ours was activated.  One post in another thread indicated that the Approval workflow was dependent on the SharePoint 2007 Workflows feature, which was deactivated in our site collection.  However, after enabling the SharePoint 2007 Workflows feature under Site Settings –> Site Collection Features, the missing workflows still were not showing up as available options when adding a new workflow to a list or library – even though, reviewing the workflow list again under Site Settings –> Site Administration –> Workflows, they were now showing as Active.


I finally resolved the issue, and the best I can tell, these workflows only become available if the Site Collection Features are activated in a specific order.  Simply deactivating and reactivating the Workflows feature (after the SharePoint 2007 Workflows feature had been activated) had no effect.  The trick for me was to deactivate the Workflows feature, deactivate the SharePoint 2007 Workflows feature, then reactivate the SharePoint 2007 Workflows feature, and finally reactivate the Workflows feature.  In my experience at least, the SharePoint 2007 Workflows feature needs to be activated before the Workflows feature for these additional OOTB workflows to be available.

It’s about time

Supporting my customers, I get to deal with a wide variety of peripherals and multi-function devices.  From a pure administration & functionality standpoint, I prefer (and recommend) Ricoh's Aficio series of multi-function devices over Canon, Sharp or even Konica Minolta Bizhubs – though all of those are quality devices for a decent-sized client.  What has been a constant struggle for me is finding a decent, cost-effective multi-function with self-contained walk-up functionality for the small client / remote office.  We've been consistently disappointed with HP devices over the last few years (I can't begin to count how many warranty claims we've had on M2727nf or M1522 devices), and even when they don't flat-out break, it seems the ADF and paper tray just wear out way too quickly.

My other gripe with the HP multi-functions is the same one I have with most other devices on the market: for scanning functionality, they require you to run software on the PC.  So scan-to-email is a matter of walking the document over to the device, going back to your desk, opening the HP software, jumping through the wizard and selecting the scan destination, then completing the email message and manually sending it.  In theory this isn't that difficult, but I have some users who don't keep Outlook open – and they can never remember that even though the HP scan app will open a new message & attach the scan, clicking Send doesn't actually send the message if Outlook isn't open; it sits in the Outbox until Outlook is opened.  And don't get me started on scan-to-folder, especially when you are working with users hot desking with kiosks . . .

So we've looked at HP devices, as well as low- to mid-level Canon, Lexmark & Samsung devices.  We even have a customer with a Xerox Phaser MFC – and we've been consistently disappointed in the way network scanning seems to be an afterthought at best.

I must say, though, that I think I have finally found a winner.  I just replaced the MFC in my office – which quite honestly is used for scanning more than anything else.  I purchased a Brother MFC-8890dw, and wow am I impressed.  Obviously, it's not going to have the nice touch-screen display and polished web interface that we get with the 5-figure Ricoh / Bizhub type devices.  But for the small office, it does network scanning extremely well.  The best part: the ability to scan-to-email or scan-to-file (SMB or FTP) by walking up to the device – no PC needed!  You can configure multiple scan-to-file profiles (each profile can be either SMB or FTP, with separate authentication per profile).  Configure the device once with your outbound SMTP server info, then either use preset scan-to-email destinations or manually key in the destination email address as needed.  It even has a front USB port allowing you to scan directly to a USB thumb drive.  As for file formats, it supports scanning to TIFF, PDF and Secure PDF, with multiple color and resolution options.

Someone correct me if I'm wrong, but I think this is about the only device with this level of network scanning functionality in this price range (MSRP $499).  And the feature list keeps going: not only do we have true network scan-to-email and scan-to-file from the device, we also get a network TWAIN driver for the scanner – so you can still scan from your favorite desktop app (Adobe Acrobat, etc.) if you want to.  The B&W laser printer is rated at 35 ppm and has full duplex printing (the 'd' in 8890dw), and an optional second 250-sheet paper tray is available.  But the duplex functionality doesn't stop with printing – the 50-sheet ADF has its own duplex functionality, allowing for automatic two-sided copying & scanning.  The 40 available speed-dials can be programmed as either fax speed-dials or one-touch scan-to-email destinations.  Finally, in addition to USB and 10/100 Ethernet, this device also includes standard 802.11g wireless connectivity (the 'w' in 8890dw), giving us the flexibility to place the device where it's convenient for users – not necessarily where we have a network drop.  Replacement supplies are very reasonable as well, with the standard 3k-page toner running about US $75 and a high-yield 8k toner running about US $115.

I've only had the device a few days, but so far I am definitely impressed – especially with the functionality it provides at its price point.  The few negatives I've encountered so far are extremely minor: the ADF is louder than I would like, and the web-based administration interface can be a bit slow at times, but I'm more than happy to live with those trade-offs for the extensive functionality.  This is definitely the device we're going to be recommending to customers who need a device in this class.  So there you have it – a completely unsolicited / uncompensated review and recommendation of the Brother MFC-8890dw.

MSKB 961143 fails to install on SBS 2003

 

So I'm having an exciting Friday night at home catching up on some support tickets, including a monitoring ticket for one of my managed SBS 2003 boxes where Microsoft update 961143 repeatedly fails to install.  Normally, whenever I have a patch that fails to install via Kaseya, simply logging in to the device & running Windows Update to install the problem patch resolves the issue.  Yeah – not so much with this one.  Whether installing the update via Windows Update or downloading it & running it manually, it consistently fails.  Running the install manually, I just get a pop-up saying something vague to the effect that installation failed, and to click OK to undo changes.  I searched the web and found thread after thread after thread of people having the same issue, but no solution.  Some suggested verifying that the companyweb site was using the default app pool, others said to check a registry key or two – but nothing worked.

The windowsupdate.log was vague, and the error code it provided (0x8007f070) didn't help with my web searches either.  I then dug down and found the individual update installation log (%temp%\QFE73170.log) and reviewed its contents:

2010/05/14 21:33:36    ————————————————————–
2010/05/14 21:33:36    begin installing the fix
2010/05/14 21:33:36    enable kerberos if necessary on companyweb
2010/05/14 21:33:36    the current start type of iis admin service is: 2
2010/05/14 21:33:36    find virtual server id of hostName: companyweb
2010/05/14 21:33:36    iterate through each IISWebServer under IIS://localhost/w3svc
2010/05/14 21:33:36    current server binding: IIS://localhost/w3svc/1, 192.168.16.2:80:
2010/05/14 21:33:36    current server binding: IIS://localhost/w3svc/1, 127.0.0.1:80:
2010/05/14 21:33:37    current server binding: IIS://localhost/w3svc/2, :6345:
2010/05/14 21:33:37    current server binding: IIS://localhost/w3svc/3, :8081:
2010/05/14 21:33:37    current server binding: IIS://localhost/w3svc/4, 192.168.16.2:80:companyweb
2010/05/14 21:33:37    found virtual server, server path is: IIS://localhost/w3svc/4/root
2010/05/14 21:33:37    get single property: IIS://localhost/w3svc/4/root,AppPoolID
2010/05/14 21:33:37    get single property successfully
2010/05/14 21:33:37    current virtual server is using app pool: DefaultAppPool
2010/05/14 21:33:37    get single property: IIS://localhost/w3svc/apppools/DefaultAppPool,AppPoolIdentityType
2010/05/14 21:33:37    get single property successfully
2010/05/14 21:33:37    identity of current virtual server’s app pool is: 0
2010/05/14 21:33:37    not using network service to run current virtual server


That last line in the log was the key:  For whatever reason, MSKB 961143 will bomb out if the DefaultAppPool is not using the NETWORK SERVICE identity.  I opened up IIS Admin, expanded <server> | Application Pools.   I right-clicked on DefaultAppPool and selected Properties.  I then went to the Identity tab on the app pool properties page, and sure enough – my DefaultAppPool was running under the LOCAL SYSTEM identity.  I changed the identity from LOCAL SYSTEM to NETWORK SERVICE, and clicked OK to save the changes.  Back in IIS Admin, I right-clicked on the DefaultAppPool again and selected Recycle to stop & restart the app pool.  After recycling the DefaultAppPool, I re-ran the KB 961143 installer and the update installed successfully.
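
Incidentally, if you'd rather script the check & fix than click through IIS Admin, a minimal VBScript sketch using the same ADSI path the QFE log walks should do the trick on SBS 2003 / IIS 6 (2 is the AppPoolIdentityType value for NETWORK SERVICE; 0, as the log showed, is LOCAL SYSTEM):

Option Explicit
Dim pool
' Same metabase path the QFE73170.log references (IIS 6 / SBS 2003)
Set pool = GetObject("IIS://localhost/w3svc/apppools/DefaultAppPool")

WScript.Echo "Current AppPoolIdentityType: " & pool.AppPoolIdentityType
' 0 = LOCAL SYSTEM, 1 = LOCAL SERVICE, 2 = NETWORK SERVICE, 3 = specific account

If pool.AppPoolIdentityType <> 2 Then
    pool.AppPoolIdentityType = 2   ' switch to NETWORK SERVICE
    pool.SetInfo                   ' commit the change to the metabase
    pool.Recycle                   ' stop & restart the app pool
    WScript.Echo "DefaultAppPool switched to NETWORK SERVICE and recycled."
End If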

SharePoint Disaster Recovery

Join me Thursday morning at 11am CST as I present on Disaster Recovery Planning for SharePoint as part of ThirdTier.net’s Third Thursday webinar series:

 

Third Tier has invited you to attend an online meeting using Live Meeting.

Follow these steps:

1. Copy this address and paste it into your web browser:

https://www.livemeeting.com/cc/harborcomputerservices/join

2. Copy and paste the required information:

Meeting ID: JCHM5Z

Entry Code: g$P5j6,Kq

Location: https://www.livemeeting.com/cc/harborcomputerservices

Local access to your SharePoint 3.0 site

OK – just a quick poll . . .    Raise your hand if you’ve seen this:

You install Windows SharePoint Services 3.0 (on either a Windows Server 2003 or 2008 box) and create a new web app & site collection.  The site works great for all of your local clients, and even works externally – but you can't access it from the server it is running on.  Specifically, when you try to browse to the site on the local server, you get prompted for credentials three times before getting a generic 401 Unauthorized error.

I'll admit I have been seeing this for quite some time, but have always been too busy to track down the cause, especially since the easy workaround is to just access the site from another machine.  Well, I finally did some digging and was able to identify the cause & the solution.  This is actually caused by the loopback check – the exact same mechanism behind the infamous 2436 Gatherer errors for companyweb in SBS 2008.  And to keep things consistent, the exact same fix for the 2436 errors will fix our inability to access a WSS 3.0 site from the local server it's running on.

You can find the specific steps on the SBS support blog:

http://blogs.technet.com/sbs/archive/2009/05/07/event-2436-for-sharepoint-services-3-search.aspx

Just add your site's host name to the BackConnectionHostNames registry entry, run an iisreset, and you'll be good to go!
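
If you'd rather script it than edit the registry by hand (handy if you have a few servers to touch), here's a minimal VBScript sketch.  The host name below is a made-up example – substitute whatever host name your WSS site answers on:

Option Explicit
Const HKLM = &H80000002
Dim reg, keyPath, siteHost, names, newNames(), i, ret

siteHost = "sharepoint.contoso.local"   ' hypothetical - use your site's host name
keyPath  = "SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0"

Set reg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\default:StdRegProv")

' Read the existing BackConnectionHostNames multi-string value (it may not exist yet)
ret = reg.GetMultiStringValue(HKLM, keyPath, "BackConnectionHostNames", names)
If ret <> 0 Then names = Array()

' Copy the existing entries and append our host name
ReDim newNames(UBound(names) + 1)
For i = 0 To UBound(names)
    newNames(i) = names(i)
Next
newNames(UBound(newNames)) = siteHost

reg.SetMultiStringValue HKLM, keyPath, "BackConnectionHostNames", newNames
WScript.Echo "Added " & siteHost & " - now run iisreset to pick up the change."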

Where did it all go?

OK – so I have a somewhat funny story to share . . .    About a week ago, I received a monitoring alert via Kaseya that free space on the C: drive of my SBS 2008 server was getting low.  I logged in to the server, opened My Computer, and it showed I was using 73 GB of my 80 GB C: partition.  So I downloaded TreeSize Free to see what was taking up all of the space.  The problem was that TreeSize showed I was only using 31.9 GB on C: – nowhere near the 73 GB that Windows was reporting.  TreeSize did indicate that it couldn't access C:\PerfLogs or C:\System Volume Information.  I manually verified the PerfLogs folder was empty, and I did find approximately 8 GB in shadow copies for the C: drive that I didn't need (all of my critical shares had been moved to a different partition), so I disabled Shadow Copies on the C: drive.  But that still left me with a 33 GB discrepancy between Windows & TreeSize . . .

At this point, I am going to share two crucial bits of information:  1) This is the first time I’ve dealt with low-drive space on a Windows 2008 box.  2)  I’ve been using TreeSize for years, and by force of habit, I always open My Computer, right-click on the drive I want to scan and launch TreeSize from the context menu.   So can you see where I went wrong?

Yep – I was quietly bitten by UAC in SBS 2008.  By launching TreeSize in my normal fashion, it was not running with elevated permissions and was unable to access a number of directories on the drive, many of them several layers deep.  Interestingly enough, TreeSize Free didn't throw any errors when it encountered a directory it couldn't access.  Once I launched TreeSize Free from the Start Menu with elevated permissions, it was able to scan the full drive and show me my smoking gun – 27 GB of IIS logs for the WSUS Administration site, collected over the last 12 months.  So after cleaning up my unnecessary shadow copies & purging old IIS logs, I'm back to 41.2 GB (51.5%) free space on my C: drive . . .

Group Policy Loopback Processing

Subtitled – “Wow, I learned something new today!”  [:)]

So in the Third Tier support queue today, Jon posed an interesting question:

How do I exclude Folder Redirection from applying to one domain-joined laptop that is out of the office & disconnected from the domain most of the time?

To revisit Group Policy basics for everyone: GPOs can apply to either computer accounts or user accounts.  GPOs that apply to computer accounts are processed when computers boot up (we've all seen the "Applying Computer Settings" message during startup), and GPOs that apply to user accounts are processed during login.  Obviously, Folder Redirection is a user setting in Group Policy, and GPOs don't have the same targeting options that Group Policy Preferences do.  So how do we apply different Group Policy user settings when users log in to specific machines?  Via User Group Policy loopback processing, of course . . .

So what is User Group Policy loopback processing?  It is a Group Policy setting that applies to Computer accounts.  When enabled, it effectively tells a computer to process User Settings in GPOs that apply to the computer account whenever a user logs on to that computer.  As a result, we are able to define user GP settings in a GPO applied to computer accounts instead of user accounts.

User Group Policy loopback processing can be enabled in one of two modes: merge or replace.  In merge mode, both the GPOs applying to the user account and the GPOs applying to the computer account are processed when a user logs in.  The GPOs that apply to the computer account are processed second and therefore take precedence – if a setting is defined both in the GPO(s) applying to the user account and in the GPO(s) applying to the computer account, the value from the GPO(s) applying to the computer account wins.  In replace mode, the GPOs applying to the user account are not processed at all – only the user settings from the GPOs applying to the computer account are applied.

In Jon's specific case, he wanted to exclude Folder Redirection for one remote laptop.  The Folder Redirection settings in Group Policy do not have a "disable" option – only "Not Configured", or enabled via the "Basic" or "Advanced" modes.  Because there is no way to explicitly disable Folder Redirection, merge mode would not meet Jon's needs – the user's GPOs would still be applied and Folder Redirection would remain enabled on the laptop.  By using replace mode and simply not defining Folder Redirection in the GPO that applies to the computer account, Jon is able to achieve his desired result.

Take-aways on User Group Policy Loopback Processing:

  • This is a COMPUTER setting, which is found under Computer Configuration | Administrative Templates | System | Group Policy | User Group Policy Loopback Processing Mode
  • You want to create a new OU in AD that is dedicated to computer accounts that will have loopback processing enabled.
  • Create a new GPO in your new OU to enable User Group Policy Loopback Processing and set the appropriate mode (merge / replace).
  • You will define the user settings you want to apply to the loopback-enabled PCs via GPOs in this same new OU.  You can define these settings either in the same GPO where you enabled the User Group Policy Loopback Processing setting, or you can create another GPO in the same OU for your user settings.
  • Remember that when using REPLACE mode, none of your other user GPOs will be applied when a user logs in to a machine that has loopback processing enabled – ONLY the user settings defined in the GPOs that apply to that machine will be applied.  (If you want to double-check that a machine has actually picked up the loopback setting, see the quick check below.)
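
On that last point – if you ever want to confirm from the client side that loopback processing is actually being applied, the policy (as best I can tell) is reflected by a UserPolicyMode value under HKLM\SOFTWARE\Policies\Microsoft\Windows\System, where 1 = merge and 2 = replace.  Treat that value name and those meanings as my assumption rather than gospel, but a quick check script would look something like:

Option Explicit
Dim sh, mode
Set sh = CreateObject("WScript.Shell")

On Error Resume Next
' Assumption: the loopback policy writes UserPolicyMode here (1 = merge, 2 = replace)
mode = sh.RegRead("HKLM\SOFTWARE\Policies\Microsoft\Windows\System\UserPolicyMode")
If Err.Number <> 0 Then
    WScript.Echo "UserPolicyMode not found - loopback processing does not appear to be applied."
ElseIf mode = 1 Then
    WScript.Echo "Loopback processing applied in MERGE mode."
ElseIf mode = 2 Then
    WScript.Echo "Loopback processing applied in REPLACE mode."
Else
    WScript.Echo "UserPolicyMode has an unexpected value: " & mode
End If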

Killing off ISA

Earlier today Susan blogged about upgrade season in her office, and getting ready to migrate from SBS 2003 to 2008.  In that post, she talked about uninstalling ISA and mentioned a post that Kevin has on that subject.  I thought I’d take a moment to expand a little bit on Kevin’s post and add a few thoughts from my own battle scars with removing ISA.

First and foremost – Kevin mentions removing the ISA firewall client from all of your PCs before you remove ISA from the server.  I cannot overstate how crucial this step is.  The ISA 2004 firewall client uninstaller wants access to the original installation MSI, which lives in a share on your SBS box – this share (mspclnt) is actually the Clients folder in the ISA installation directory.  So what happens when you remove ISA from your SBS?  You guessed it – the mspclnt share with the firewall client installation files is removed, which means any firewall clients still installed on PCs are not going to be happy when you try to remove them and they can't find the MSI.

Since the Clients folder under the ISA installation folder is typically only about 5 MB, I copy it to a safe spot on the server – usually my Tech directory where we keep various utilities and scripts.  Here's why: more and more, customers are backing up their workstations, whether via Acronis, StorageCraft or Windows Home Server.  In the not-so-distant future, after removing ISA, we may need to restore a PC from an image taken before ISA was removed – and need to remove the firewall client again.  Or we may discover a forgotten PC / laptop that we missed removing the firewall client from.  There are all sorts of scenarios – but by keeping the Clients folder intact, we can share it out with the original mspclnt share name at any time and uninstall the firewall client just as if ISA were still installed on the server.  Without the mspclnt share, you have a very VERY ugly path in front of you, and it is safe to say you may end up facing the decision of living with the firewall client still on the machines, or wiping & re-installing the OS . . .

Second – Kevin also makes a brief mention about proxy settings.  When you uninstall the firewall client from a PC, it will automatically disable proxy settings for the user account that is running the uninstall, but not for any other users on the machine.  So if you have a PC that multiple users log in to, or if you are running a terminal server, be prepared for some proxy pain.  I actually have a little VBScript that disables proxy settings for the current user by changing the value of HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyEnable from 1 to 0.  I modify my login script to call the VBScript, in effect ensuring proxy gets disabled for each user when they log in to each machine.
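
There's nothing fancy to that kind of script – a minimal sketch is literally just the one registry write, called from the login script with cscript //nologo DisableProxy.vbs:

' DisableProxy.vbs (sketch) - turn off the IE proxy setting for the user running the script
Option Explicit
Dim sh
Set sh = CreateObject("WScript.Shell")
sh.RegWrite "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyEnable", 0, "REG_DWORD"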

The other aspect of proxy settings to keep in mind is your server-side applications.  Unless you modified your ISA firewall policy to allow unauthenticated outbound HTTP access from the server itself, you most likely specified proxy information for apps like Trend Micro's Worry-Free Business Security or even WSUS so they could download their updates automatically.  After removing ISA, you no longer have a proxy server, which means apps configured to use a proxy aren't going to be able to get out to the internet – and as a result, you stop getting automatic updates for things like A/V.  So you will need to manually update the connection settings in these apps to remove the previously defined proxy settings.

So – here’s my quick checklist for removing ISA from your network & installing a hardware firewall:

  1. Prep your hardware firewall in a lab setting.  Enter all public IP info, disable DHCP, and create all of your inbound rules.  It's best to do this while ISA is still installed & working, so you can refer to the rules in ISA to make sure you don't miss any necessary inbound rules for your environment.
  2. Backup your ISA configuration.  We're moving away from ISA permanently, but if we hit an issue with the new hardware solution where something that worked under ISA no longer works, the ISA backup – an XML file that is relatively easy to read – lets us see what rules we had and what they did, without having to reinstall & restore ISA on the SBS.
  3. Open up your outbound access in ISA by creating the proverbial ALL/ALL/ALL rule.  In other words, create a new access rule in ISA allowing all outbound traffic via all protocols for all users/computers.  Much of the internet access in ISA on SBS is dependent on users being members of the Internet Users security group, and the firewall client on the PCs is what actually passes user info to the ISA server so it can check group membership.  Once we remove the firewall client from PCs, ISA isn't going to be getting user info, and some things that worked before aren't going to work now.  If you only have 5 PCs and are moving from ISA to your hardware firewall on a Sunday when no one is working, you might be able to skip this step.  But if you have a larger number of PCs, this helps ensure you don't disrupt users' internet access too much while removing the firewall client . . .
  4. In my case, I update my domain login script to call my DisableProxy.vbs script at this point.
  5. Uninstall the firewall client from ALL PCs.  Again – see my notes above.  Your life will be MUCH simpler if you ensure the firewall client is completely removed from all PCs before removing ISA from your server.
  6. Copy the contents of the mspclnt share (%programfiles%\Microsoft ISA Server\Clients by default) to a safe location on the server, and plan to keep this folder safe for some time  [:)]
  7. Follow Kevin’s steps 3-9 to remove ISA from the server.
  8. When you re-run the CEICW, it should automatically update the DHCP scope option on the server to use the internal IP of the new hardware firewall as the default gateway.  If you have any devices using static IP addresses, you will need to manually update those with the new gateway.  (HINT:  Take a few extra minutes to create DHCP reservations for each device using a static IP and switch those devices to DHCP – then if you have another network reconfiguration in the future, all you have to do is reboot those devices instead of reconfiguring them [:)].)  For all of your other DHCP devices, you will want to run an ipconfig /release followed by an ipconfig /renew so they pull the new gateway, or you can reboot them as well.  (HINT 2 – PSTools are your friend.  Create a batch file with the two ipconfig commands, and use PSExec to push & execute the batch file on all machines in the domain from the server – 5 minutes tops to update the IP config on all domain machines that are online, instead of sneakernetting.  There's also a VBScript alternative sketched after this list.)
  9. ALSO – if you followed Jim Harrison's steps to configure auto-detection of proxy settings on your SBS LAN, you will want to remove the wpad A record from your internal AD domain's forward lookup zone in DNS – otherwise you may have devices pulling proxy settings pointing to your now non-existent proxy server via auto-detect.
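
And since I mentioned the release/renew batch file in step 8 – if you'd rather push a script than a batch file, here's a sketch of an alternative that does the same release/renew via WMI instead of ipconfig.  Run it locally on each machine (via PSExec, your RMM agent, or the login script):

' Sketch: release & renew DHCP on all DHCP-enabled adapters
' (an alternative to running ipconfig /release followed by ipconfig /renew)
Option Explicit
Dim wmi, nics, nic
Set wmi = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\cimv2")
Set nics = wmi.ExecQuery("SELECT * FROM Win32_NetworkAdapterConfiguration " & _
                         "WHERE IPEnabled = True AND DHCPEnabled = True")

For Each nic In nics
    nic.ReleaseDHCPLease   ' drop the current lease (brief network blip)
    nic.RenewDHCPLease     ' pull a fresh lease, including the new default gateway
Next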

So that's my addendum to Kevin's excellent post.

 

P.S. . . .   and if you haven’t decided on a hardware firewall yet, I highly recommend Calyptix devices.  These are the standard devices we are implementing when migrating customers to SBS 2008.

The Savvy Tech's Hardware-Independent Restore

Ok, so I thought I’d share a real-world support scenario that happened to me today:

So I have a new contract customer I signed a couple weeks ago, and they went live as of 9/1.  I was doing various maintenance tasks on their network over the weekend – removing unnecessary apps from PCs to improve performance, getting patches installed, etc.  About the only thing left last night was patching their Windows 2003 terminal server.  I push the patches out via Kaseya, the patches install successfully, the server initiates a reboot – but never comes back.  Now, I've been doing remote patching / reboots for years, and this has only ever happened a handful of times.

I log in to their SBS and attempt to ping the TS – no response.  The TS is a whitebox server that is about 4 years old and doesn’t have a remote access card or IP-KVM connected.  The client is in bed, and not having the TS really isn’t going to be an issue until their approx half-dozen remote users try to access Great Plains in the morning. So I didn’t bother calling to wake anyone up – instead, I more or less surprised the VP when I walked in at 7:30 this morning to address a problem they didn’t know they had yet. 

Short story: the server was pretty much on its deathbed – the alarm LED on the case came on whenever the processor tried to do anything.  It took 5 attempts before I was able to get the server to boot.  When I did get logged in, the CPU was grinding constantly with the error LED lit, but looking at Task Manager I didn't see anything out of the ordinary – other than the fact that the system was so slow it was virtually unusable for one user at the console, let alone a half-dozen-plus remote TS users trying to use Great Plains.  Quick diagnosis & gut instinct told me this was a hardware issue.  Being a 4-year-old whitebox, it was long out of warranty.  I knew the server just needed to be replaced, but the remote users couldn't wait a week to 10 days for me to get approval, get a box ordered from Dell, and get it installed.  Additionally, given the state of the machine, it would probably have taken days to get an image-based backup using ShadowProtect – and since this is a new customer and there's no data on the TS, it wasn't being backed up.

SO – I ran back to my office and grabbed a spare PC I use for random stuff on my bench (Acer – about 3 yrs old, but has a dual-core Pentium CPU @ 2.8 GHz & has been upgraded to 2GB RAM).  I also grabbed my old Adaptec 1205SA PCI SATA host controller off the shelf and returned to the client.

The TS in question was running a RAID 1 array using an on-board SATA RAID controller.  I shut down the TS, and installed the Adaptec SATA controller in an open PCI slot, then after 4 tries the server finally booted again.  I logged in, the OS found the Adaptec SATA controller & I installed drivers from my thumb drive.  Once the driver installation completed successfully, I shut down the TS again.

I removed the Adaptec SATA controller & drive 0 from the TS.  I installed the Adaptec controller in the Acer PC, connected drive 0 from the TS to it, and disconnected the PC's existing SATA HDD.  I powered on the PC, and since drive 0 was connected to the Adaptec SATA controller – AND the Win2k3 OS on drive 0 already had drivers for that controller installed – the Win2k3 TS OS booted successfully on (almost) completely different hardware.  On the first login, the OS detected the various new hardware (on-board boot controller, DVD drive, on-board NIC, etc.).  Once drivers for the new hardware were installed & the onboard NIC configured, I powered down the Acer PC, removed the Adaptec SATA controller card, & connected drive 0 to the on-board primary SATA port.  I powered on the PC, the Win2k3 OS again booted successfully, and we verified that remote users were able to log in and launch Great Plains.

Obviously, using a 3-year-old desktop PC as a terminal server is not a long-term solution.  But this minimized downtime for the remote users (they were all online before noon), and provided both me & the customer with valuable breathing room to resolve the root issue and get the ball rolling on replacing this server.  And given the small number of users and basic Dynamics GP use, the performance of this temporary hardware is more than sufficient for the remote users (and it sure beats the alternative :) )

And yes, there is more than one way to skin a cat – and multiple ways this problem could have been addressed.  In this particular situation, I felt this was the best approach to get to a working system in the least amount of time possible, considering the severe instability of the original hardware, the lack of an existing image backup of the TS, and the fact that I could easily break the mirror to run off a single HDD from the server.

Migrating your SharePoint blog

As some of you may know, I assist Susan with administering & maintaining the blogs here at msmvps.com.  For various reasons, over the past few months I have become familiar with various approaches to blog migrations – most notably the BlogML project.  As a result, I've sort of become the neighborhood go-to guy for moving blogs, including assisting Steve Riley with his move from msinfluentials.com to wordpress.com.

A couple weeks ago I was presented with an intriguing request / challenge.  My friend Wayne Small over at sbsfaq.com had been running his site and his blog on SharePoint for several years, but was in the process of moving everything over to a single WordPress site and wanted to migrate the content from his SharePoint blog.  The challenge wasn't so much getting the content into WordPress, since there are several importers available, including a BlogML importer.  The problem was getting the content out of SharePoint 3.0.  BlogML exporters for most platforms are web-based, allowing you to initiate the export from within the blog platform and download the resulting export file.  While I have coding experience, I don't have any experience building add-ins for SharePoint and wasn't about to open that can of worms, so I decided on a different approach.

For those of you who don’t know, there is rather impressive integration between SharePoint 3.0 & Access 2007, so I opted to use Access to extract the information out of Wayne’s old SharePoint blog.  This approach actually gave me more flexibility in meeting the various requirements:

  1. Where the SharePoint blog used Categories, Wayne wanted to use Tags in WordPress.
  2. We wanted to migrate all content – posts, comments, & embedded content (images in posts, etc.)

Moving categories to tags seemed simple enough; however, I discovered that the current 2.0 iteration of BlogML doesn't support tags (which admittedly surprised me).  As a result, Aaron Lerch's BlogML import class for WordPress did not support tags either.  Scouring the web, I found that Wayne John had updated Aaron's BlogML import class to allow importing tags from BlogEngine.NET exports.  So I did a quick & dirty install of BlogEngine.NET on my sandbox server to examine its default BlogML export and see how it marked up post tags in the XML.

One of the major behind-the-scenes differences between SharePoint blogs & WordPress is how embedded content is stored.  When you compose posts using an offline editor such as Windows Live Writer, inserted images are stored differently in each platform when the post is uploaded & published: WordPress stores the images in the file system on the site, whereas SharePoint stores them in the database as attachments to the post record.  Luckily for us, Access 2007 can handle attachments on SharePoint lists natively.  In early test runs, I found there was some duplication of image names between various posts in Wayne's blog (especially capture.png), so I decided to save the attachments for each SharePoint post to a separate folder to avoid name collisions.

So – how does the final solution look? 

  1. I created a new Access 2007 database, and used the External Data functionality to link to the Posts, Comments, & Categories lists on the SharePoint blog site.
  2. I created a simple form that allowed me to enter the path & filename I wanted for the resulting BlogML export file, as well as a path to where I wanted the embedded images from the SharePoint blog stored.  Obviously, the form contained a Start button as well . . .
    3. When the Start button was clicked, the code behind the button did all of the heavy lifting (a rough sketch of this code follows the outline):
    1. It creates the BlogML export file using the path / filename listed on the form, and writes the various header information.
    2. We open a new recordset containing the Posts table.
      1. We call a helper function to format the post published date how the XML file wants it.
      2. We open a second recordset that contains the attachments for the current post we are processing
      3. If the current post has attachments, then:
        1. We save each attachment to the local file system, using the path specified on the main form.  To prevent filename collisions, we create a new subfolder for each post, using SharePoint's numeric post ID as the folder name.  (So if we listed C:\export as the folder for embedded images on the main form, we would end up with something like C:\export\<post_id>\capture.png.)
        2. For each attachment, we populate the <attachment /> node of the BlogML output.
        3. We parse the body of the current post and replace every path we find pointing to the old attachment location with the new one.  E.g., links to embedded content in the SharePoint blog are referenced via "/Lists/Posts/Attachments/<post_ID>/<filename>", whereas once we import the content into WordPress, the path will be something like "/wp-content/uploads/<post_ID>/<filename>".  By updating the relative paths to our embedded content during the export, we can better ensure that our embedded content transfers seamlessly.  This isn't the case with most BlogML exports, because they simply export the content without updating embedded links to match the target platform.
        4. We cycle through and repeat for each attachment for the current post.
      4. We write the post content to the output file.
      5. For each category listed in the Posts recordset, we write a <Tag /> to the output file (to match Wayne’s requirements). 
      6. We open a third recordset which contains all of the comments for the current post.  If the current post has comments, then:
        1. We call a helper function to format the comment date how the XML file wants it.
        2. We write the current comment to the output file
        3. We cycle through & repeat for each remaining comment for the current post.
      7. We cycle through and repeat for each post
    3. We write the closing tags to the output file and complete the process.
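
For anyone curious what the code behind that Start button roughly looks like, here's a heavily trimmed sketch of the core loop (Access 2007 VBA with DAO).  The form control names, the Published field name, and the attachment-field handling reflect my database and some assumptions about how Access names the linked SharePoint columns – and I've left the actual BlogML node writing as comments rather than reproduce the whole schema here:

Private Sub cmdStart_Click()
    Dim fso As Object, outFile As Object
    Dim rsPosts As DAO.Recordset2, rsAttach As DAO.Recordset2
    Dim fld As DAO.Field2
    Dim body As String, postId As Long, attachDir As String

    Set fso = CreateObject("Scripting.FileSystemObject")
    Set outFile = fso.CreateTextFile(Me.txtExportFile, True)   ' export path entered on the form
    ' ... write the BlogML header / opening tags here ...

    Set rsPosts = CurrentDb.OpenRecordset("Posts")              ' linked SharePoint Posts list
    Do While Not rsPosts.EOF
        postId = rsPosts!ID
        body = Nz(rsPosts!Body, "")

        ' Save each attachment to <image folder>\<post ID>\ to avoid filename collisions
        Set rsAttach = rsPosts.Fields("Attachments").Value      ' attachment child recordset
        If Not (rsAttach.BOF And rsAttach.EOF) Then
            attachDir = Me.txtImageFolder & "\" & postId
            If Not fso.FolderExists(attachDir) Then fso.CreateFolder attachDir
            Do While Not rsAttach.EOF
                Set fld = rsAttach.Fields("FileData")
                fld.SaveToFile attachDir & "\" & rsAttach.Fields("FileName")
                ' ... write the matching <attachment /> node here ...
                rsAttach.MoveNext
            Loop
            ' Re-point embedded links at where the images will live in WordPress
            body = Replace(body, "/Lists/Posts/Attachments/" & postId & "/", _
                                 "/wp-content/uploads/" & postId & "/")
        End If

        ' ... write the <post> node: formatted published date, the (updated) body,
        '     one tag per category, then loop a Comments recordset for this post ...

        rsPosts.MoveNext
    Loop

    ' ... write the closing tags ...
    outFile.Close
End Sub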

So in the end, we have one BlogML.xml file with all of the blog content (posts, comments, categories, tags, etc.), and a folder that contains all of the exported embedded images from the SharePoint blog posts (in separate sub-folders by post ID).

At this point, Wayne simply had to copy the embedded images subfolders to his /wp-content/uploads folder for his site, then run the BlogML import process to import the content generated.  Voila! 

For example:

Post on SharePoint blog:  SBS 2008 R2 I want it now!

Migrated post on WordPress blog:  SBS 2008 R2 I want it now!

Notice the embedded image is displayed as expected – and if you look at the image properties on the WordPress post, you'll see it references the relative path to the image on the WordPress site.  In addition, the original post date & author info has been maintained, as have all comments, with their original comment dates & author info.

One important caveat to share: when I first started testing the import process, the BlogML import in WordPress was failing with an error that invalid characters were encountered on line X at column Y.  However, opening the BlogML export file in Notepad, WordPad, or IE, I couldn't see anything that appeared to be an invalid character.  Finally, opening the BlogML output file in Visual Studio 2008 let me see the invalid characters, and I was able to remove them with a simple find/replace, after which the import process in WordPress completed successfully.  I'm not sure what caused these characters to be present in the first place – perhaps the authoring tool Wayne used, or perhaps something in the export process, going from SharePoint to Access to text.

But anyway, that was one of my recent side projects.  Let me know what you think – the Access database I used is rather rough around the edges and relies on a number of assumptions I was able to make for this migration, which resulted in certain options being hard-coded.  If there is enough interest, I'll polish it up a bit and post it for others to use to export their SharePoint blogs to BlogML.