Extending the web part framework – part 2

In part 1, I showed how we implemented a ‘toolbox’ of page templates and functionality modules wrapped up in a governance framework, to fulfil our client’s requirement of a flexible WCM platform for building 80-100 internet sites with varying requirements. In this post, I want to detail some of the issues we ran into and the resolutions we found, focusing primarily on the ‘module framework’ we developed, which is heavily oriented around SharePoint web parts.

Quick recap

The client is a large multi-national enterprise, and the idea is that content authoring teams in 80-100 countries will take what we’ve delivered on MOSS to create their country’s internet presence e.g. .com, .co.uk, .fr, .es etc., replacing the existing mish-mash of sites on different technologies with inconsistent branding/look and feel.

In terms of the module framework, the cornerstones of our implementation were (see part 1 for more complete details on these):

  1. Module matrix – rules for which module can be used where, to guide authors away from building a user experience which doesn’t ‘make sense’
  2. SmartPart-like approach, but with web part properties – web parts wrapping user controls but also supporting web part properties exposed in custom tool parts
  3. Base web part/base tool part class – responsible for ‘framework’ behaviour such as checking if the current web part can be added (according to the module matrix)
  4. Combine interface of publishing field controls with web part storage – since publishing field controls (e.g. RichHtmlField) must be added in a ‘static’ manner at design-time but our authors can add controls dynamically at run-time, we developed custom controls which combine the rich functionality of the publishing HTML editor with web part storage
  5. Control adapter for WebPartZone for accessibility compliance – to get round the problem of all the HTML tables generated by SharePoint’s web part framework, which will prevent a site validating for AA
  6. Present only our web parts in the web part picker – since standard SharePoint web parts are not used anywhere in these sites
  7. Remove unnecessary options when editing web part properties (tool parts) – to avoid confusing the authors

Issues and resolutions

I think that many of the challenges we faced are worth sharing as they came about through general web part development, rather than anything specific to what we did. Before I detail the actual gotchas, take note of some key development characteristics of our project:

  • Solutions and features used to deploy artifacts such as page layouts, content types etc.
  • Kivati Studio used for some other deployment aspects
  • Main functionality implemented in user controls – web parts were effectively thin wrappers around the .ascx files using LoadControl()
  • Web parts which are ‘mandatory’ are added to pages using the AllUsersWebPart element in a feature (though as the points probably illustrate, we looked at numerous ways of dealing with this)

Finding #1 – web parts outside of web part zones cannot be edited

The reason we wanted to have web parts outside of zones (perfectly possible by dragging a web part directly into page layout markup in SharePoint Designer) is for ‘fixed’ page modules which could not be removed by the content author. When we placed web parts outside of web part zones, we found the web parts would run fine in presentation mode but unfortunately could not be edited (e.g. to edit web part properties) – the edit menu for the web part simply does not appear. I speculate this is because it is web part zones which are linked to web part storage, and thus web part properties cannot be persisted without a zone (the values in the markup will always be used). Hence, if you want editable web parts, you need web part zones.

Resolution – ensure all web parts (even ones which cannot be removed) live in a web part zone.

Finding #2 – embedding web parts into user control markup appears to be problematic

We tested various permutations of using web parts in/out of web part zones, and also with the HTML markup directly in the page layout .aspx or in a child .ascx file. After establishing that web part zones were required, we also found that whether the markup was in the .aspx or .ascx appeared to make a difference. This was unexpected, but the net effect seems to be that if you insert the web part markup into a web part zone which is in a user control rather than directly in the page layout .aspx (i.e. by refactoring the HTML markup for the web part zone and its contents into a user control), again the edit menu will not display. I’m not sure why this is, but it could be related to the page execution lifecycle.

Resolution – accept that if web part zones will have web parts added to them at design-time by markup, the web part zone declaration cannot be in a user control.

Finding #3 – when using AllUsersWebPart element, duplicate web parts appear if the feature containing your page layouts is reactivated

We decided our ‘fixed’ web parts would be added to pages using the AllUsersWebPart feature element (N.B. using this approach, ‘default’ web parts are associated with page layouts in the feature which deploys them. Web part zones are left empty on the page layout, and SharePoint provisions the web part into the zone at the time of creating a page from the layout). The issue we had with this is that all the web parts in all the zones in existing pages would be duplicated if the page layout feature was reactivated – this is because this XML is used both when the feature is activated (in the same way as, say, provisioning for content types happens on activation) but also when new pages are created from a page layout.

Resolution – write a script (a Kivati task in our case) to remove duplicate web parts across all sites

[UPDATE – Waldek has an elegant solution to this problem in ‘Preventing provisioning duplicate Web Part instances on Feature reactivation’, as well as sample code similar to what we wrote for our script. DOH!]
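For reference, the clean-up logic boils down to something like this – a simplified sketch rather than our actual Kivati task, which assumes a ‘duplicate’ can be identified as a second web part with the same title in the same zone (your matching rule may differ, and check-out handling for publishing pages is omitted):

using System.Collections.Generic;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebPartPages;
using WebPart = System.Web.UI.WebControls.WebParts.WebPart;

public static class DuplicateWebPartCleaner
{
    public static void RemoveDuplicates(SPWeb web, string pageUrl)
    {
        SPFile page = web.GetFile(pageUrl);
        SPLimitedWebPartManager manager =
            page.GetLimitedWebPartManager(System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

        // collect the duplicates first - we can't delete while enumerating..
        List<string> seenKeys = new List<string>();
        List<WebPart> duplicates = new List<WebPart>();

        foreach (WebPart webPart in manager.WebParts)
        {
            // treat a second web part with the same title in the same zone as a duplicate..
            string key = manager.GetZoneID(webPart) + "|" + webPart.Title;
            if (seenKeys.Contains(key))
            {
                duplicates.Add(webPart);
            }
            else
            {
                seenKeys.Add(key);
            }
        }

        foreach (WebPart duplicate in duplicates)
        {
            manager.DeleteWebPart(duplicate);
        }

        // N.B. real code also needs to run this across every page in every site..
    }
}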

Finding #4 – duplicate web parts can also appear when the page layout is customized (ghosted)

I’m not exactly clear on the reasons why customized files would ever cause duplicate web parts to appear, but that’s certainly what we seemed to find. What happened is that we would deploy our master pages/page layouts using a feature to our QA environment, but immediately these files would be provisioned in that site as customized (i.e. served from the content database), instead of being uncustomized and referenced on the filesystem. After further investigation, we traced the cause of this unexpected behaviour to these attributes, which SPD adds to page layouts:

meta:progid="SharePoint.WebPartPage.Document" meta:webpartpageexpansion="full"

Resolution – ensure the deployed version of the file does not contain these attributes. We actually switched to running uncustomized master pages/page layouts even in our development farm. This means that we deployed the files using a feature and thereafter never opened them in SPD (editing only the source-controlled feature file instead).
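If you want to verify nothing has slipped through, the customization status of files can be checked in code. Here’s a minimal sketch (the class name is illustrative; ‘Master Page Gallery’ is the root web’s gallery containing master pages and page layouts) which reports any customized files:

using System;
using Microsoft.SharePoint;

public static class CustomizationChecker
{
    public static void ReportCustomizedPageLayouts(string siteUrl)
    {
        using (SPSite site = new SPSite(siteUrl))
        {
            SPList gallery = site.RootWeb.Lists["Master Page Gallery"];

            foreach (SPListItem item in gallery.Items)
            {
                // 'Customized' here means unghosted i.e. served from the content database..
                if (item.File.CustomizedPageStatus == SPCustomizedPageStatus.Customized)
                {
                    Console.WriteLine("Customized: {0}", item.File.Url);
                }
            }
        }
    }
}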

Finding #5 – avoid setting default properties in the web part definition file (.webpart)

A final lesson we learnt is that when working with web parts, it’s often better to avoid using the .webpart definition file extensively for setting default property values. There’s nothing wrong with the mechanism – effectively these values are read whenever the web part is provisioned on a page, and your instance will set its properties to these values. The problem, of course, is when you realize a property value you defined in the .webpart file needs to be updated because something changed. What happens to all the existing instances on pages around your site? As you might guess, the answer is nothing – unless you take steps to update those also, which generally means writing some kind of script to use SPLimitedWebPartManager. This can be pretty inconvenient when all you wanted to do was quickly change a default value.

Resolution – consider ensuring .webpart files are stripped to the bare minimum (assembly name etc.) and configuration comes from somewhere else. We typically rolled these config items into our use of the Config Store.
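For what it’s worth, the fix-up script is straightforward if you do get caught out. A sketch of the idea (the page URL and title-based matching are illustrative – you’d more likely match on the web part’s type):

using Microsoft.SharePoint;
using Microsoft.SharePoint.WebPartPages;
using WebPart = System.Web.UI.WebControls.WebParts.WebPart;

public static class WebPartPropertyUpdater
{
    public static void UpdateExistingInstances(SPWeb web, string pageUrl)
    {
        SPFile page = web.GetFile(pageUrl);
        SPLimitedWebPartManager manager =
            page.GetLimitedWebPartManager(System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

        foreach (WebPart webPart in manager.WebParts)
        {
            if (webPart.Title == "Share price")
            {
                // push the new 'default' into this existing instance..
                webPart.Title = "Live share price";
                manager.SaveChanges(webPart);
            }
        }
    }
}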

Summary

We ran into a few unexpected gotchas when building on the web part framework, but steps can be taken to minimise their impact. Hope you find these useful if you do web part development. Special thanks to Karoly Szalkary for helping to refresh my memory on some of these!

P.S. After 2 years writing about it, I’ve decided I no longer need to capitalize the ‘f’ in ‘feature’ – I think we’re all on the same page on that one now 😉

Extending the web part framework – part 1

Today I want to show some of the interesting things we’ve been doing with web parts for one of our clients. There’s quite a lot to talk about so it will be over two articles:

  • Part 1 – background and implementation
  • Part 2 – issues and resolutions

There are a couple of things in particular which I think are quite cool, as we’ve effectively combined classic WCM (publishing) site functionality with a customized implementation of the web part framework. The context is a fairly large roll-out to an enterprise client, but what we’re rolling out is a centralized platform for 80-100 internet sites. The idea is that content authoring teams in 80-100 countries will take what we’ve delivered on MOSS to create their own sites – replacing the existing mish-mash of sites on different technologies with inconsistent branding/look and feel.

Clearly a key challenge here is satisfying the diverse needs of so many stakeholders. So a cornerstone of the platform is that sites can be tailored somewhat, so each country has some flexibility to communicate with their audience in the way they think is best. We effectively give the authors a set of page templates and building blocks, and a system which governs how the blocks can fit together so that the user experience will still ‘make sense’. Needless to say, a lot of analysis and consideration has gone into this – both in terms of what functionality was needed but also user journeys and navigation through the site, and the experience architects on our side (LBi) played a vital role here. There are many aspects to the project I could zero in on, but since I want to focus on the implementation details here, I’ll briefly list some of these building block requirements before showing how we did it.

Key requirements/challenges:

In order to create the different page types, we needed around 15 page layouts, including these:

  • Home page
  • Channel hub/Alternative hub/Sub home – these are different template options for ‘2nd and 3rd level’ pages 
  • Content page
  • Product page
  • Media release
  • List – provides links to a series of related pages
  • Etc.

And whilst some aspects of page functionality were ‘fixed’ on the template, there were many other items which were optional – these were to be added to pages by the authors, either in a ‘web part’ kind of way or perhaps something else. Some examples of these optional ‘page modules’ were:

  • ‘Hero’ feature – used to highlight something on prominent pages with an image/flash/text
  • Right-hand promo
  • Content editor module – allows an author to enter arbitrary content, but for reasons which will become clearer we developed an interesting custom control which is kind of a cross between a publishing HtmlField and a Content Editor web part (covered later)
  • Generic content module – rolls-up formatted content/links to a selected page
  • List/tabbed list – provides links to a series of related pages
  • Dynamic share price – displays latest stock price based on web service call
  • Product selector – using AJAX cascading dropdowns to filter products
  • Etc.

Although there were lots of other challenges (such as multi-lingual content, packaging/documenting every deployment aspect so the hosting company could deploy etc.!), I felt that building the ‘framework’ could be more challenging than individual functionality bits. To help frame what you’ll read next, some initial questions we had for the implementation were:

  • How do these optional bits of functionality get added to the page? As web parts, or something else?
  • How do we get accessibility-compliance if web parts?
  • How do we provide configuration if not web parts?
  • How do we restrict which modules can be used where (as per the specification)?
  • Since we’re in a ‘flexible’ publishing site, how do we determine which fields are needed on the content types? Does each content type need to have all the possible fields the author might choose to add?
  • If we are working with publishing controls, how would we bind the dynamically added control to the ‘back-end’ publishing field on the content type?

The implementation

As well as the optional page modules, most of the templates had a classic set of publishing fields. After looking at custom approaches, we concluded the web part framework had a lot going for it for the optional stuff – clearly we could avoid building a user interface to pick the module from a list/add to page/allow configuration of properties specific to the module, and also get drag and drop (amongst other things) as an added bonus. The concept of web part zones – as a container where one or more modules could be placed – was also important to our page structure.

Another challenge for the optional modules was where to store the data. If they were publishing fields, we would need every possible module to have a corresponding field on every possible content type, and this was pretty impractical when looking at the spec. Web parts, of course, use a different model and the framework takes care of data storage regardless of how many controls are on the page.

On the downside, a key thing to remember with web parts in publishing pages is that web part data is (by definition) not stored in publishing fields, and therefore isn’t versioned in the same way. After discussing with the client, in our case this proved to not have as big an impact as we initially thought, due to the split and nature of what content would be stored in publishing fields vs. what would be web parts. So, having the client’s acceptance of this trade-off, we went with web parts and came up with these solution elements:

  1. Module matrix

    This comprised two SharePoint lists which contained the ‘rules matrix’, to enforce the design team’s specification of what functionality could be used on which page type. Effectively the data provides the mapping of modules and page layouts. Being list data, it meant that it could be easily updated by the central team if a policy change was required. This data was consumed by our base web part (point 4).

  2. SmartPart-like approach, but with web part properties

    We wanted the actual functionality of our web parts to be implemented in user controls, for the typical reason of avoiding building HTML in C# code (wrong on so many levels!). This is obviously what the SmartPart does using LoadControl(), but we had the additional requirement of needing to pass web part property values to our user controls – this meant we could use the familiar ‘tool part’ interface (i.e. setting web part properties in the right-hand pane) for control configuration. 

    In our model, each user control has a corresponding ‘wrapper’ web part/tool part which understands which properties are required and how to build the properties UI. In the web part’s OnInit() method, values are passed from the web part properties to the user control so that the latter is initialized ready to do its processing.
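    To illustrate the shape of this, here’s a minimal sketch – the names, .ascx path and property are illustrative, and our real web parts also derived from the base classes described in point 3 and used custom tool parts rather than the default property pane:

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls.WebParts;

    // hypothetical code-behind for the user control which implements the functionality..
    public class HeroModuleControl : UserControl
    {
        public string HeroTitle;
    }

    // thin 'wrapper' web part which loads the .ascx and passes property values through..
    public class HeroModuleWebPart : WebPart
    {
        private string heroTitle = "Default title";

        [WebBrowsable(true), Personalizable(PersonalizationScope.Shared)]
        public string HeroTitle
        {
            get { return heroTitle; }
            set { heroTitle = value; }
        }

        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);

            // load the user control which implements the actual functionality..
            HeroModuleControl control = (HeroModuleControl)Page.LoadControl("~/MIW/PageLayoutsControls/HeroModule.ascx");

            // initialize the control from the web part property, ready for its processing..
            control.HeroTitle = this.HeroTitle;

            this.Controls.Add(control);
        }
    }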

  3. Base web part/base tool part

    All our web parts/tool parts were derived from our custom classes which abstracted some responsibilities. Since we couldn’t easily change the web part picker screen to only display appropriate web parts for the zone the author had selected, we built the check into the base web part – if an ‘invalid’ web part was added, the web part renders nothing in presentation mode but in edit mode we display a message to the author like this:

    [Image: ModuleNotValidMessage]

    Adding too many web parts to a zone (count determined in the module matrix data) would have a similar effect.
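    In outline, the check in the base class looked conceptually like this (a sketch only – the module matrix lookup is stubbed out here, and the message text is illustrative):

    using System.Web.UI;
    using System.Web.UI.WebControls.WebParts;

    public abstract class BaseModuleWebPart : WebPart
    {
        protected override void Render(HtmlTextWriter writer)
        {
            if (!IsValidInCurrentZone())
            {
                // edit mode - show the author a message; presentation mode - render nothing..
                if (this.WebPartManager.DisplayMode == WebPartManager.EditDisplayMode)
                {
                    writer.Write("This module is not valid for this location - see the module matrix.");
                }
                return;
            }

            base.Render(writer);
        }

        private bool IsValidInCurrentZone()
        {
            // the real implementation queries the 'module matrix' lists to establish whether
            // this module is permitted in this zone/page layout, and how many instances are allowed..
            return true;
        }
    }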

  4. Combine interface of publishing field controls with web part storage

    Having decided to use web parts for our control architecture, we had one requirement for something similar to the standard Content Editor web part (CEWP). However, this control is pretty lame compared to the MOSS publishing HtmlField, and we quickly established our client needed more than the basic CEWP. So we combined the bits we wanted from both – the front end control used by the publishing field type (the RichHtmlField control), but the backing store of web part storage rather than a publishing field. This meant authors could add multiple instances of this optional module to their page (and get the nice editing experience), but because it’s a web part we didn’t need to worry about having a corresponding set of fields on each possible content type. In code/integration terms it’s the same approach, but in the end we actually swapped the standard MOSS control for the control which fronts Telerik’s RADEditor field since the client wanted to move to this: 

    [Image: CustomContentEditorWebPart]

    Also note the use of another control typically used with publishing fields here, the AssetUrlSelector – this provides the ‘Browse…’ button shown above, and can be used to provide a friendly way for an author to browse to a file.
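    As a flavour of how the AssetUrlSelector gets used, a tool part might build it up like this (a simplified fragment – the class name is hypothetical and the property wiring back to the web part is omitted):

    using Microsoft.SharePoint.Publishing.WebControls;
    using Microsoft.SharePoint.WebPartPages;

    public class ContentEditorToolPart : ToolPart
    {
        private AssetUrlSelector assetPicker;

        protected override void CreateChildControls()
        {
            // renders a textbox plus the 'Browse...' button for picking a file from the site..
            assetPicker = new AssetUrlSelector();
            assetPicker.ID = "assetPicker";
            this.Controls.Add(assetPicker);
        }

        public override void ApplyChanges()
        {
            // pass assetPicker.AssetUrl into the parent web part's property here..
        }
    }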

  5. Control adapter for WebPartZone for accessibility compliance

    Since web parts normally render with a stack of nested HTML tables which won’t validate against AA, action needs to be taken to remedy this if accessibility is a design goal. However this isn’t necessarily a big deal – the approach is that you ‘correct’ the HTML for the WebPartZone control in presentation mode only, thus leaving the tables intact in edit mode for all the web part editing framework stuff which needs to happen. You do lose the client-side Web Part Services Component (WPSC) API doing this, but we had no requirement for it anyway (I rarely see it used). I initially assumed I’d have to write a control adapter to do this, but I found that David Schneider has already done the job – this works fine. It’s also possible the latest version of the AKS has one – I can’t remember if I checked.
  6. Present only our web parts in the web part picker

    Since this is a highly bespoke WCM platform rather than a standard collaboration environment, we don’t want to see any of the standard web parts in the picker for these sites. Two steps to this one:

    – delete all the .webpart files from the web part galleries in the sites (N.B. we used Kivati for rolling out such changes across all the site collections – more on this in the future). However, doing this will still leave you with ListView web parts for all the lists/libraries in your site, so you also need to..
    – ensure all your WebPartZone declarations have the little-documented ‘QuickAdd-ShowListsAndLibraries’ property set to false:

    <WebPartPages:WebPartZone id="g_AB07678E486C46bc962DFC8446A6CD13" runat="server" title="Zone 1" QuickAdd-ShowListsAndLibraries="false" />

    Authors are then not confused by any standard web parts which aren’t appropriate for our scenario:

    [Image: StrippedWebPartPicker]

  7. Remove unnecessary options when editing web part properties (tool parts)

    Finally, we do a bit of work with the accompanying tool parts (for properties editing) for our web parts to avoid confusing our authors with options which won’t take effect. As an example, for a web part which looks like this in presentation mode:

    [Image: ProductSelectorModule]

    The tool part looks like this:

    [Image: ProductSelectorToolPart]

    In case you’re wondering what to look at, it’s that we’ve removed the standard options SharePoint would normally provide for every web part (such as chrome style etc.), since we want to control these to ensure proper formatting. Normally we’d have these sections at the bottom of the tool part:

    [Image: RemovedToolPartOptions]

Summary

There are many ways SharePoint’s web part framework can be extended, and here I’m only showing the path we followed. For a requirement such as our client’s, web parts provided a great starting point, perhaps showing there can sometimes be a place for web parts in an accessible publishing site so long as the trade-offs are understood and accepted.

In part 2 of this series we’ll look at issues encountered and their resolutions.

A better Config Store for SharePoint sites

I’ve recently made some enhancements to my Config Store framework on Codeplex which I’m now ready to share. Many of these enhancements are a result of adapting the solution to a large project where we built a platform for 80-100 internet sites – hence it’s now become a bit more ‘enterprise’. With such things I generally make a deal with my employer where I do the work (or most of it) in my spare time and then get to share the code publicly, so here we go. Before I delve into the details, let’s have a reminder of what the Config Store is all about:

Recap – the SharePoint Config Store in a nutshell

Regular readers may remember this is a solution which allows use of a SharePoint list to store configuration values used by your SharePoint application – the idea is that your web parts/server controls/page layouts etc. store any strings/data they need for configuration in here, as a more flexible alternative to web.config or similar. Since our values are now in a SharePoint list, configuration can be updated across the farm through the browser by administrators who have the appropriate permissions. We can also optionally take advantage of all the other things lists give us, such as auditing, item-level security, alerts and version history. Finally, a caching layer is used to avoid round trips when your code fetches configuration values.

On the last couple of projects where my team has used it, we’ve finished up with 100+ config items in the list – storing all sorts of things, from URLs and strings to ‘application behaviour’ switches:

[Image: ConfigStore]

If you need a more complete overview, see my original Config Store article.

Enhancements in the new release

  1. Optional "hierarchical" configuration model similar to web.config

    In the first release, since all the config values are stored in one list this means your configuration is stored in one ‘nominated’ site collection. This is fine for WCM sites which may only use one site collection, but may not easily map to certain enterprise requirements. What’s new in this release is that the framework can now use a ‘hierarchical’ model, where a Config Store list exists in whatever site collections should have one, but a ‘master’ site collection is nominated which contains the ‘master’ config values. What happens is that if the config item you request is in the ‘local’ Config Store you’ll get that value, and if not you’ll get the value from the master list. This allows local overriding of the parent values if required – in practice we found 95% of the config items would be stored in the master list only, but having the facility to override the other 5% was critical to supporting some of the functionality we developed.

    Needless to say, this is the implementation which best suited our requirements. It could be it doesn’t really suit yours, but developers could consider starting with my source code and modifying, since other aspects such as the caching layer, Feature files, event handlers etc. might not need to be changed much.

    If you want to continue to just use one Config Store list (even if you consume the config values in multiple site collections), the new code will continue to work just fine for this model too.
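    Conceptually, the hierarchical lookup boils down to something like this (a sketch only – the real code adds the caching layer, error handling and the plumbing to locate each list):

    public static class HierarchicalLookup
    {
        public static string GetValue(string category, string key)
        {
            // a value in the local site collection's Config Store wins..
            string localValue = FindInConfigStore("local", category, key);
            if (localValue != null)
            {
                return localValue;
            }

            // ..otherwise fall back to the master site collection's list..
            return FindInConfigStore("master", category, key);
        }

        private static string FindInConfigStore(string scope, string category, string key)
        {
            // stub - the real implementation queries the Config Store list
            // in the relevant site collection..
            return null;
        }
    }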

  2. Easier to use in ASPX markup

    Similar to my recent Language Store framework, the Config Store now supports easily dropping values into ASPX markup by implementing an expression builder. So if all you want to do is set the Text property of a control, you can now do this without cluttering up the code-behind. Or, as an alternative example, here we’re retrieving a URL from the Config Store (stored in Category ‘PageUrls’ and key ‘MyAccountPage’) to assign to a hyperlink:
    <asp:HyperLink id="hyperlink1" NavigateUrl="<%$ SPConfigStore:PageUrls|MyAccountPage %>"
    Text="My account" ImageUrl="images/pict.jpg" runat="server"/>

  3. Fixed caching bug for farm environments

    Yes, something I have to hold my hands up to here – the caching layer in the initial release didn’t adequately deal with multiple servers. The effect was that it required an app pool recycle to pick up changes to config values, rather than them taking effect immediately. Not the end of the world, but certainly inconvenient and taking away some of the benefits of using a SharePoint list for configuration. So in this release, the implementation correctly relies on a back-end resource (using a CacheDependency) to invalidate the cache across all servers in the farm. The implementation I chose was to add the items to the cache with a CacheDependency on a text file – this needs to be located on a path which all servers can access – and the event handler now updates a timestamp in the file to invalidate the item in the cache across all servers.

    I went with using a file as the dependency item as I thought it was more lightweight than using a SqlCacheDependency – I didn’t really want to impose the SQL configuration etc. to be able to use the Config Store. However, with the file cache dependency I’ve chosen, be aware that some production configurations may have internal firewalls which prevent all WFEs from accessing a shared file in this way – check before you deploy. Of course the code is there for you to modify should you wish to make changes in this area.
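    In outline, the scheme looks like this (a sketch – the file path is illustrative, and must be readable/writeable by all WFEs):

    using System;
    using System.IO;
    using System.Web;
    using System.Web.Caching;

    public static class FarmCacheHelper
    {
        private const string DependencyFile = @"\\SharedLocation\ConfigStore\CacheDependency.txt";

        public static void AddToCache(string key, object configValue)
        {
            // if the file changes, the cached item is invalidated on every server..
            HttpRuntime.Cache.Insert(key, configValue, new CacheDependency(DependencyFile));
        }

        public static void InvalidateAllServers()
        {
            // called from the list event handler when a config item changes -
            // touching the file trips the CacheDependency across the farm..
            File.WriteAllText(DependencyFile, DateTime.Now.ToString());
        }
    }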

  4. Amendment to Feature to prevent web.config modifications being made on Feature activation

    Another thing that got in the way when I tried to implement the Config Store on our enterprise project was the Feature event receiver which adds the required appSettings keys to web.config. The idea here is that, to help simplify installation and initial setup, the required web.config entries are added with empty values so all the developer has to do is plug in the appropriate values for his/her site. Sounds great – and it is for single site collection sites. But for multiple site collections, when we come to activate the Feature in the 2nd, 3rd, nth site collection – of course the receiver runs and adds the empty entries to web.config again, despite the fact that we’d already inserted the real values on the first activation. And since the new values come later in the file, guess which ones are used?

    So in this release there’s a Feature property which determines if web.config modifications are made – it’s set to ‘False’ by default but it’s there if you prefer to change it.

  5. ‘Config value’ column is now bigger

    Previously this column was a ‘single line of text’ but it’s now a ‘note’. This means you can happily store HTML/XML fragments or other larger values, which is very useful in some scenarios. However, note that if you’re already using the Config Store on your site and want to ‘upgrade’, this schema change is significant – I discuss it in the readme.txt file.

So that’s it. You can download the updates from www.codeplex.com/SPConfigStore – hope you find it as useful as we have.

Using .Net Expression Builders to set control properties

In my last post I introduced my Language Store solution for multi-lingual SharePoint sites, and showed the two ways it can be used:

In standard .Net procedural code:

string sButtonText = LanguageStore.GetValue("Search", "SearchGoButtonText");

Declaratively in HTML:

<asp:Button runat="server" id="btnSearch" Text="<%$ SPLang:Search|SearchGoButtonText %>" />

This declarative syntax is very useful as it means the developer doesn’t have to clutter up code-behind files just to call the method to retrieve a value, then assign it to the ‘Text’ property of various controls. I’ve also retrofitted this to my Config Store solution (along with some other enhancements) and this will be available on Codeplex soon. You might notice it’s the same syntax as the SPUrl token which can be used in master pages/page layouts to get a relative path to an image or CSS file, and that’s because I’m using the same .Net technique. Since I had to do some digging to work out how this was done, I’m guessing (could be wrong here!) many other developers haven’t come across this either, so here’s how it’s done.

Implementing an expression builder class

An expression builder is essentially a class which derives from System.Web.Compilation.ExpressionBuilder and contains logic to evaluate an expression at page parse time. The ‘secret’ is that the ASP.Net page parsing engine understands that it needs to call the class’s method whenever it encounters an expression in the appropriate form. These are the things that join this mini-framework together:

  • Class derived from ExpressionBuilder which overrides the EvaluateExpression() and GetCodeExpression() methods
  • Declaration in web.config which associates your prefix (‘SPLang’ in my case) with your expression builder class
  • Optional use of ExpressionPrefix attribute on class for designer support
  • An expression in declarative HTML (as per the example above)

Taking things step-by-step, here’s what my class looks like:

[ExpressionPrefix("SPLangStore")]
public class LangStoreExpressionBuilder : ExpressionBuilder
{
private static TraceSwitch traceSwitch = new TraceSwitch("COB.SharePoint.Utilities.LanguageStore",
"Trace switch for Language Store");

private static LangStoreTraceHelper trace = new LangStoreTraceHelper("COB.SharePoint.Utilities.LangStoreExpressionBuilder");

public static object GetEvalData(string expression, Type target, string entry)
{
trace.WriteLineIf(traceSwitch.TraceVerbose, TraceLevel.Verbose, "GetEvalData(): Entered with expression '{0}'.",
expression);

string[] aExpressionParts = expression.Split('|');
string sCategory = aExpressionParts[0];
string sTitle = aExpressionParts[1];

if ((aExpressionParts.Length != 2) || (string.IsNullOrEmpty(sCategory) || string.IsNullOrEmpty(sTitle)))
{
trace.WriteLineIf(traceSwitch.TraceError, TraceLevel.Error, "GetEvalData(): Unable to parse expression '{0}' into " +
"format 'Category|Title' - throwing exception.",
expression);

throw new LanguageStoreConfigurationException("Token passed to Language Store expression builder was in the wrong format - " +
"expressions should be in form Language Store Category|Item Title e.g. Search|SearchGoButtonText");
}

string sValue = LanguageStore.GetValue(sCategory, sTitle);

trace.WriteLineIf(traceSwitch.TraceInfo, TraceLevel.Info, "GetEvalData(): Retrieved '{0}' from Language Store.",
sValue);

trace.WriteLineIf(traceSwitch.TraceVerbose, TraceLevel.Verbose, "GetEvalData(): Returning '{0}'.",
sValue);

return sValue;
}

public override object EvaluateExpression(object target, BoundPropertyEntry entry,
object parsedData, ExpressionBuilderContext context)
{
return GetEvalData(entry.Expression, target.GetType(), entry.Name);
}

public override CodeExpression GetCodeExpression(BoundPropertyEntry entry,
object parsedData, ExpressionBuilderContext context)
{
Type type1 = entry.DeclaringType;
PropertyDescriptor descriptor1 = TypeDescriptor.GetProperties(type1)[entry.PropertyInfo.Name];
CodeExpression[] expressionArray1 = new CodeExpression[3];
expressionArray1[0] = new CodePrimitiveExpression(entry.Expression.Trim());
expressionArray1[1] = new CodeTypeOfExpression(type1);
expressionArray1[2] = new CodePrimitiveExpression(entry.Name);
return new CodeCastExpression(descriptor1.PropertyType, new CodeMethodInvokeExpression(new
CodeTypeReferenceExpression(base.GetType()), "GetEvalData", expressionArray1));
}

public override bool SupportsEvaluate
{
get { return true; }
}
}

If you’re wondering why two methods are required, it’s because GetCodeExpression() is used where the page has been compiled, and EvaluateExpression() is used when it is purely being parsed. My code follows the MSDN pattern which supports both modes and uses a third helper method (GetEvalData()) which both call into. It’s this GetEvalData() method which does the work of parsing the passed expression and then using it to obtain the value – in my case the expression is the ‘category’ and ‘title’ of the item to fetch from the Language Store. Notice that the key line in all of that is the one in GetEvalData() which calls my existing LanguageStore.GetValue() method – so effectively my expression builder is just a wrapper for this method.

My web.config entry looks like this:

<add expressionPrefix="SPLang" type="COB.SharePoint.Utilities.LangStoreExpressionBuilder, COB.SharePoint.Utilities.LanguageStore, Version=1.0.0.0, Culture=neutral, PublicKeyToken=23afbf06fd91fa64" />

And finally here’s how the component parts of the expression get used:

[Image: ExpressionBuilderSyntax]

For SharePoint solutions, assuming we’re deploying our code as a Feature/Solution, we’d generally want to add the web.config entry automatically by way of the SPWebConfigModification class. You can find the code to do this on Codeplex in the source code for my Language Store solution (in the Feature receiver).

Finally, if you’re building an expression builder and this information doesn’t get you all the way, the MSDN documentation for the ExpressionBuilder class has some additional details.

Enhancing the design-time experience with a custom expression editor

This is something I haven’t looked at yet, but looks extremely cool! If the standard expression builder stuff wasn’t convenient enough for you, you can extend things further by providing a custom ‘ExpressionEditor’ for use in Visual Studio. If I understand the possibility correctly, this can provide a better experience in the VS properties grid in two ways:

  • a custom editor sheet (e.g. a dialog to enter the ‘category’ and ‘title’ of the Language Store item – let’s say ‘Search’ and ‘SearchGoButtonText’ was entered respectively, this would ‘build’ the string in the correct delimited ‘Search|SearchGoButtonText’ form required)
  • a custom picker using the Expressions collection – this could (I think) be used to query the Language Store list and display all the items, so that selecting the item to display the translation of is as simple as a few clicks, no typing!

I’d absolutely love to implement this for the Language Store/Config Store – so I might return to this at a later date!

Conclusion

Expression builders provide a powerful, clean way to inject method calls into your markup. In most cases we’re used to seeing them return strings as in my Language Store/Config Store implementations, and note that the .Net framework itself ships with implementations such as AppSettingsExpressionBuilder, ConnectionStringsExpressionBuilder and ResourceExpressionBuilder.

However, one final thing to bear in mind is that the signature of the method returns an object – so theoretically it should be possible to do a whole host of other things, where the processing returns a more complex object which gets assigned to the control property. An example could be data-binding scenarios where your method returns something which implements IEnumerable/IList – this could then be assigned to the DataSource property of your control declaratively. You might have other possibilities in mind, but hopefully that’s food for thought 😉
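As a purely hypothetical example (I haven’t built this one – the ‘SPConfigStore’ prefix is real but the keys are illustrative, and GetEvalData() would need to return the list rather than a string), the markup might look like:

<asp:Repeater runat="server" id="rptFooterLinks" DataSource="<%$ SPConfigStore:Navigation|FooterLinks %>" />

You’d still call DataBind() on the control in code, but the data source itself would come straight from the markup.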

Building multi-lingual SharePoint sites – introducing the Language Store

If you’re ever asked to build a multi-lingual site in SharePoint, it quickly becomes apparent that there are a few extra considerations compared to a single language site. These could include:

  • Information architecture
  • Language/culture detection
  • Deciding whether to use variations or not
  • URL strategy

..and so on. Clearly these are decisions which will have a different ‘answer’ for every multi-lingual project, and typically your client’s specific requirements will steer your approach. However one challenge which is likely to remain constant across most such projects is this one:

  • How to deal with the many small strings of text which are not part of authored page content which need to be translated and displayed in the appropriate language

This is the challenge I’m focusing on here. To illustrate, here’s an example from the BBC site where I’ve highlighted all the strings which may need to be translated but which don’t belong to a particular page:

[Image: BBCExample]

..and that’s just one page – it turns out a typical site will have many of these. If you have to translate additional strings shown to authors in edit mode only, you could easily find the total number stretching into the hundreds. So we start to need a framework for storage/retrieval of these ‘page furniture’ items. If we were dealing with a shrink-wrapped product, .Net resource files could be a good choice, but this approach is probably not flexible enough for a website and won’t allow content authors/power users to enter translations. Clearly something based around a SharePoint list is called for, so enter the ‘Language Store’ – my solution to the problem which you can now download from Codeplex (link at the end).

Introducing the Language Store

The Language Store is an adaptation of my earlier Config Store solution and follows some of the same principles:-

  • values are stored in a SharePoint list
  • an API is provided to retrieve values with a single method call
  • a caching framework is used for optimum performance
  • easily deployed as a .wsp

Items in the list are categorized, and have a column for each language translation:

[Image: LanguageStoreListNarrow]

Note that each translation column in the list is named with the convention ‘LANG_<culture name>‘ (N.B. you might know a ‘culture name’ as a ‘locale ID’ or similar) – so when a new language needs to be added to the site, you simply create a new column with the appropriate name and add the translations. A list of culture names can be found in the MSDN documentation for the CultureInfo class.

Retrieving values

To retrieve a value, we simply call the GetValue() method and pass the category and title of the item to retrieve:

string sButtonText = LanguageStore.GetValue("Search", "SearchGoButtonText");

Also, since many of the items we might put in the Language Store are only used in the presentation of the page, it’s often a shame to have to switch to the code-behind just to fetch these values and assign them to a control’s ‘Text’ property. So I’ve provided a tokenized method similar to SPUrl, which allows you to simply drop Language Store values into your markup like this:

<asp:Button runat="server" id="btnSearch" Text="<%$ SPLang:Search|SearchGoButtonText %>" />

I like this because it means you don’t end up cluttering your code-behind with lots of lines just for fetching values from the Language Store and assigning them to ASP.Net labels or controls. For those who don’t know how this is done I’ll write more about it in the next post as I think it’s a cool, under-used facility in .Net.

How the Language Store determines which language to retrieve

In the current implementation, the regional settings of the SPWeb are used to determine which translations are retrieved. It’s a single method in the code (a single line in fact!), so this scheme could easily be changed if you have a different requirement. We’re using the Language Store on our current project, and using the SPWeb setting makes sense for us since we’re building around 100 different sites in ~30 languages, as opposed to one site which displays in the local language (according to the user’s thread culture or similar).

Note that if the Language Store doesn’t contain a value for the requested culture, a fallback process is used similar to .Net’s globalization framework:

  1. Check preferred culture  e.g. fr-CH for French (Switzerland)
  2. Check preferred culture’s parent e.g. fr for French
  3. Check default language (determined by configuration) e.g. en for English

This is useful where some items might have a different version for say American English (EN-US) and British English (EN-GB), but other items don’t require distinction so a single value can be entered into the parent column (EN).
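In code, the fallback might be implemented along these lines (a simplified sketch of the scheme – the real implementation works against the list columns, and the method/parameter names here are illustrative):

using System.Collections.Generic;
using System.Globalization;

public static class LanguageFallback
{
    public static string GetColumnName(CultureInfo culture, ICollection<string> translationColumns, string defaultLanguage)
    {
        // 1. preferred culture e.g. 'fr-CH'..
        string preferred = "LANG_" + culture.Name;
        if (translationColumns.Contains(preferred))
        {
            return preferred;
        }

        // 2. preferred culture's parent e.g. 'fr'..
        string parent = "LANG_" + culture.Parent.Name;
        if (translationColumns.Contains(parent))
        {
            return parent;
        }

        // 3. default language from configuration e.g. 'en'..
        return "LANG_" + defaultLanguage;
    }
}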

By the way, if you’re wondering where the SPWeb regional settings are because you’ve never needed to change them, they’re here:

[Image: RegionalSettingsLink]

[Image: RegionalSettingsPage]

A checkbox allows the regional settings you make on a given web to cascade down to child webs, so we simply set this at the root of the site as a one-time operation.

Other bits and pieces

  • All items are wrapped up in a Solution/Feature so there is no need to manually create site columns/content types/the Language Store list etc. There is also an install script for you to easily install the Solution.
  • Caching implementation is currently based around a CacheDependency on a file – this enables the cache on all servers in your farm to be invalidated when an item is updated, but does require that all WFEs can write to this location (e.g. firewalls are not in the way).
  • The Language Store can also be used where no SPContext is present e.g. a list event receiver. In this scenario, it will look for values in your SharePoint web application’s web.config file to establish the URL for the site containing the Language Store (N.B. these web.config keys get automatically added when the Language Store is installed to your site). This also means it can be used outside your SharePoint application, e.g. a console app.
  • The Language Store can be moved from its default location of the root web for your site – to do this, create a new list (in whatever child web you want) from the ‘Language Store list’ template (added during the install), and modify the ‘LanguageStoreWebName’/’LanguageStoreListName’ keys which were added to your web.config to point to the new location. Alternatively, if you have already added 100 items which you don’t want to recreate, you could use my other tool, the SharePoint Content Deployment Wizard at http://www.codeplex.com/SPDeploymentWizard, to move the list.
  • All source code and Solution/Feature files are included, so if you want to change anything, you can.
  • Installation instructions are in the readme.txt in the download.

You can download the Language Store and all source code from www.codeplex.com/SPLanguageStore. All feedback welcome!

Workflows not being associated with lists/libraries

I don’t often do "a funny thing happened to me on the way to configuring a SharePoint farm recently"-type posts, but I’m making an exception here – this issue not only took a bit of figuring out, it’s also kind of interesting! This is partly because SharePoint doesn’t do what you’d expect it to do under the circumstances, and you could probably chase your tail for a while on this one if you didn’t think it through. 

Scenario:-

  • Workflow is not enabled on any of the lists/libraries in the site in production – specifically, there is no association so the workflow list is empty
  • The workflow we were expecting to see enabled is the standard publishing approval workflow – since we are in a WCM/publishing site scenario, this would be associated with each Pages list in the hierarchy
  • In our case, initial configuration had just been completed by the hosting company and we were asked to start our testing. Search configuration had not yet been performed
  • Previous environments did not display this behaviour
  • A custom site definition is used to create any site/web – this is what specifies that the approval workflow should be associated

At first it seemed like an issue with the site definition. I was thinking that the hosting company had missed a deployment step – running STSADM -o upgradesolution on the .wsp containing the site definition – but accessing the servers showed the latest files in place. I knew that because search configuration hadn’t been done yet, the hosting company hadn’t yet created the SSP – this was their next task. I couldn’t initially see a link between the SSP and workflow (since SharePoint workflows execute in the w3wp.exe process), but then I started wondering if there was any indirect link, and came up with this as a chain of dependencies:

[Image: chain of dependencies]

Could it be that the sites didn’t have workflow enabled because the SSP wasn’t there at the time the sites were provisioned? Sounds plausible, but then I thought hang on – surely what would happen is that workflow associations and so on would be there as usual, but when a workflow form is accessed it would error with the familiar message that session state must be configured:

[Image: InfoPathSessionError]

Well, as you might have gathered, the answer is no, this isn’t what happens! SharePoint genuinely will not (or cannot) add the workflow associations to your lists/libraries, and furthermore they will not magically appear when your SSP is created. You will need to go back and configure workflow on each list/library (or content type) either through the UI or with code.

So the moral of the story is:

Create your SSP before any sites are provisioned!

Simple data access pattern for SharePoint lists

So, you’re in the early stages of your project and coding has started. It’s already becoming apparent that some abstraction is needed for accessing data in a few key lists, but you’re not sure what. Of course there’s no ‘one-size-fits-all’ answer to this question, but let’s run through some options:

  • Use LINQ4SP
  • Use the Patterns & Practices group’s samples
  • Build your own data access layer (DAL)
  • Do nothing – each developer works with the data however they like

In my mind the best answer to this question is probably one of the first two – particularly on projects which are building something like a traditional application (i.e. with a domain model and entities which are tied to data) on top of SharePoint, as opposed to projects which are doing, say, simple WCM. However, like me, maybe you’ve still not spent significant time looking at LINQ4SP (and maybe have memories of the original LinqToSharePoint incarnation not quite making it to maturity) and are either in the same position with the P&P samples or are thinking it looks a little too enterprise for your current needs. And on a small project with a short dev cycle, I wouldn’t blame you.

So onto other options – in terms of building your own DAL, well of course it’s possible, but if you’re on a smallish project then presumably finding the time to hand-code an abstraction for all your data needs is going to be tricky. Additionally I’m not too fond of this approach – I’ve just seen it done poorly by others too many times, resulting in a layer which performs badly/doesn’t scale because objects aren’t disposed correctly/doesn’t provide the convenience it was intended to. And finally, as far as the ‘do nothing’ option goes, although we might not be striving for the ultimate pattern we are trying to avoid the duplication, inconsistency and maintenance nightmare that could come if we allow each developer on the project to work with the data how they like.

My suggestion:-

What I’ve used a couple of times is an ‘in between’ approach – specifically, in between doing nothing and building a complex DAL. What I’m showing here isn’t exactly what I’ve used (made a couple of ‘improvements’ as I was tapping it out!), but I think of it as a very quick, lightweight pattern which at least helps reduce the worst of the problems:

using System;
using System.Data;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;

public static class Employee
{
    public static readonly string ListName = "Employees";

    public static class Fields
    {
        public static string PersonTitle = "Person_x0020_title";
        public static string FirstName = "First_x0020_name";
        public static string LastName = "Last_x0020_name";
        public static string StartDate = "Start_x0020_date";
        public static string Division = "Division";
        public static string ID = "ID";
        public static string Salary = "Salary";
    }

    /// <summary>
    /// Fetches employee list item - notice that the caller has to supply an SPWeb object, but all other
    /// implementation details (list name, field names) are taken care of..
    /// </summary>
    public static SPListItem GetEmployeeListItem(int EmployeeID, SPWeb Web)
    {
        SPListItemCollection queryResult = executeEmployeeLookup(EmployeeID, Web);

        // real code should check queryResult.Count before indexing..
        return queryResult[0];
    }

    /// <summary>
    /// Fetches employee DataRow for 'read-only' situations, or where we want to cache/serialize etc.
    /// </summary>
    public static DataRow GetEmployeeDataRow(int EmployeeID, SPWeb Web)
    {
        SPListItem employeeItem = GetEmployeeListItem(EmployeeID, Web);
        DataRow drEmployee = SPListItemCollectionHelper.CreateDataRowFromListItem(employeeItem);

        return drEmployee;
    }

    /// <summary>
    /// Private method which does actual query.
    /// </summary>
    private static SPListItemCollection executeEmployeeLookup(int employeeID, SPWeb web)
    {
        SPList employeeList = web.Lists[Employee.ListName];

        // query employee list (ID is a 'Counter' field)..
        SPQuery employeeQuery = new SPQuery();
        employeeQuery.Query = string.Format("<Where><Eq><FieldRef Name=\"{0}\" /><Value Type=\"Counter\">{1}</Value></Eq></Where>",
            Employee.Fields.ID, employeeID);

        SPListItemCollection employees = employeeList.GetItems(employeeQuery);

        return employees;
    }

    /// <summary>
    /// Example of another method related to employees.
    /// </summary>
    public static SPListItemCollection FetchAllHiresSince(DateTime StartDate, SPWeb web)
    {
        SPList employeeList = web.Lists[Employee.ListName];

        // query employee list..
        SPQuery employeeQuery = new SPQuery();
        employeeQuery.Query = string.Format("<Where><Geq><FieldRef Name=\"{0}\" /><Value Type=\"DateTime\">{1}</Value></Geq></Where>",
            Employee.Fields.StartDate, SPUtility.CreateISO8601DateTimeFromSystemDateTime(StartDate));

        SPListItemCollection employees = employeeList.GetItems(employeeQuery);

        return employees;
    }
}

Points of note:

  • We’ve centralized the repository details of employee data such as list name and field names
  • Getting hold of an Employee list item is now one line for the caller, and they don’t need to know how to find the data
  • An SPWeb object must be passed, meaning the caller has responsibility for obtaining and disposing – more on this later
  • The caller has a ‘get as DataRow’ method – you might feel this isn’t needed, but I think it can be a useful API function. In contrast to an SPListItem, a DataRow is completely disconnected from SharePoint and therefore there are no unmanaged SPRequest objects hanging off it which need to be disposed. This means it can be cached/serialized etc., and additionally can be passed around a deep stack of methods without the calling code having to be responsible for disposals (which by the way, would need to be done in a Finally block so the disposals happen even if an exception occurs)
  • For the ‘get as DataRow’ method, a helper class is used to translate from SPListItemCollection to DataTable and SPListItem to DataRow – this is principally because the existing SPListItemCollection.ToDataTable() method has a bug; a sketch of this helper follows below. (N.B. This is intentionally not in the form of extension methods for clarity!)
  • The class is not an instance class, so we don’t do anything like wrap each list field with a property. This might not be as convenient for the caller, but means we don’t have to worry about whether the data item is ‘dirty’
  • Update operations are left to the caller
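For completeness, here’s one possible shape for the translation helper mentioned above – a sketch rather than the exact code, which also omits the SPListItemCollection-to-DataTable counterpart:

using System;
using System.Data;
using Microsoft.SharePoint;

public static class SPListItemCollectionHelper
{
    public static DataRow CreateDataRowFromListItem(SPListItem item)
    {
        // build a standalone DataTable with a column per field, then copy the values in -
        // the resulting DataRow has no SharePoint objects hanging off it..
        DataTable table = new DataTable();

        foreach (SPField field in item.Fields)
        {
            if (!table.Columns.Contains(field.InternalName))
            {
                table.Columns.Add(field.InternalName, typeof(object));
            }
        }

        DataRow row = table.NewRow();
        foreach (SPField field in item.Fields)
        {
            row[field.InternalName] = item[field.Id] ?? (object)DBNull.Value;
        }

        table.Rows.Add(row);
        return row;
    }
}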

This means the calling code (in a SharePoint web context) looks something like this:

public void GetReadOnlyEmployee()
{
    DataRow drEmployee = Employee.GetEmployeeDataRow(56, SPContext.Current.Web);

    // do something/pass happily around codebase..
}

public void GetEmployeeForUpdate()
{
    SPListItem employee = Employee.GetEmployeeListItem(56, SPContext.Current.Web);
    employee[Employee.Fields.Salary] = 50000;
    employee.Update();
}

To me, this approach has a good "bang for buck" because it’s extremely quick to implement but does make the situation drastically better in team development. 

Passing the SPWeb object is key:-

In my article Disposing SharePoint objects – what they don’t tell you, I highlight the difficulties of keeping track of objects to dispose in a complex class library. However, I’m now starting to see the code I used as an example there as something of an anti-pattern.  The simplified code sample I used to demonstrate the problem was:

public void DoSomething()
{
    bool bDisposalsRequired = false;

    // get list from SPContext if we have one..
    SPList list = getListFromContext();
    if (list == null)
    {
        // otherwise get list from objects we create..
        list = getInstantiatedList();
        bDisposalsRequired = true;
    }

    // do something with list..
    foreach (SPListItem item in list.Items)
    {
        processItem(item);
    }

    if (bDisposalsRequired)
    {
        list.ParentWeb.Dispose();
        list.ParentWeb.Site.Dispose();
    }
}

private SPList getInstantiatedList()
{
    // can't dispose of these objects here if we're returning a list - we'll be attempting to use
    // objects which have already been disposed of..
    SPSite site = new SPSite("http://cob.blog.dev");
    SPWeb web = site.OpenWeb("/MyWeb");
    SPList list = web.Lists["MyList"];

    return list;
}

private SPList getListFromContext()
{
    SPContext currentContext = SPContext.Current;
    SPList list = null;

    if (currentContext != null)
    {
        list = currentContext.Site.AllWebs["MyWeb"].Lists["MyList"];
    }

    return list;
}

In this structure, internal API code (i.e. the DoSomething() method) is responsible for obtaining the SPWeb object needed to find the list, which generally means it is also responsible for its disposal. And this is where the difficulties arise. However, if the caller has to provide the IDisposable objects, it can then be responsible for calling Dispose() because it knows when they are no longer needed. Most often the caller will simply be passing a reference to SPContext.Current.Web, but if the caller happens to be a Feature receiver, the SPWeb obtained from the receiver’s properties would be passed, or if no existing SPWeb object was available to the caller (e.g. console app) a new SPWeb object would have to be instantiated and passed. Disposals then become much simpler because they can always happen in the same place as any instantiations.

Conclusion:-

I’m sure there are better patterns out there, but even a simple approach like this provides much more structure than leaving it all to the caller (particularly when the calling code is being written by different developers!). For me the key thing is to standardize how code in your codebase accesses list data – much better than having field and list names dotted here, there and everywhere. One thing the approach doesn’t specifically consider is unit-testing – the sample projects in the Patterns & Practices stuff look useful here, and I for one will be getting better-acquainted!

P.S. Thanks to Rob Bogue for helping me crystallize these thoughts, and apologies to Sezai Komur for not getting round to mailing a draft through earlier as I promised!

My top 5 WCM tips presentation

Had a great time presenting my WCM tips presentation over the last week or so – first to a record attendance at the UK SharePoint user group (we hit 200+ attendees for the first time!) and then to a Gold partner audience at Microsoft today. I’d like to think the user group record was due in part to the great agenda, but suspect it was really down to the draw of free beer, pizza and curry from LBi 😉 Instead of posting a simple link to the slides, I want to run through the info here because:

  1. I can attempt to convey some of the info which was in my demos here.
  2. I know how many of you will read a blog post but not follow a link to a PowerPoint deck!

First off, this presentation isn’t about discussing the standard (but critical) WCM topics which you can easily find quality information on these days. Personally I think if you’re embarking on a WCM project your starting point for information should be Andrew Connell’s book (and his blog articles if you don’t already subscribe), and reading in detail some of the key MSDN articles which have been published on the subject (I list my favorite resources at the end). In these places, you’ll find discussion on some of the sub-topics (non-exhaustive list) which I see as the bedrock of WCM:

  • Security
  • Accessibility
  • Optimization
  • Deployment

So instead of covering these in detail, my tips focus on key techniques I’ve found to be powerful in delivering successful WCM projects. They won’t be suitable for all WCM projects, but hopefully they give some food for thought and provide some extra value to the WCM info already out there.

Tip #1 – Implement HTML markup in user controls, not page layouts in SPD

Explanation:

Instead of adding your page layout HTML to the page layout in SPD, create a ‘parent’ user control for each layout which then contains the HTML and child user controls for this layout. This means you have a 1-to-1 mapping between page layouts in SPD and ‘parent’ user controls within your VS web project. These parent user controls contain the actual markup and also any child user controls used on your page.

Slide bullets:

  • Faster development experience – less SPD pain [instantaneous save rather than 3-5 second save plus occasional unwanted changes to markup]
  • Page layout markup now in primary source control [alongside your other code]
  • Much simpler deployment of updates [simply XCOPY user controls when you deploy updates to other environments]

What it looks like:

So we see that our page layouts now have very little markup (note the ASP.Net Content controls must stay at the top level, so our parent user control gets referenced inside PlaceHolderMain):

[Image: PageLayoutMarkup]
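In sketch form, the page layout boils down to something like this – control names/paths are illustrative, and assume the virtual directory from tip #2:

<%@ Page language="C#" Inherits="Microsoft.SharePoint.Publishing.PublishingLayoutPage,Microsoft.SharePoint.Publishing,Version=12.0.0.0,Culture=neutral,PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Src="~/MIW/PageLayoutsControls/HomePage.ascx" TagPrefix="miw" TagName="HomePage" %>
<asp:Content ContentPlaceHolderId="PlaceHolderMain" runat="server">
    <%-- All the real markup lives in the 'parent' user control --%>
    <miw:HomePage id="homePage" runat="server" />
</asp:Content>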

..and then all of our real markup code is in the parent user control and child user controls which it references:

[Image: UserControlMarkup]
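..the parent user control being plain HTML plus child control references, along these lines (again illustrative):

<%@ Control Language="C#" %>
<%@ Register Src="~/MIW/PageLayoutsControls/NewsRotator.ascx" TagPrefix="miw" TagName="NewsRotator" %>
<div class="homepage">
    <h1>Welcome</h1>
    <miw:NewsRotator id="newsRotator" runat="server" />
    <!-- ..remaining HTML and child user controls.. -->
</div>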

Tip #2 – create custom IIS virtual directory pointing to your web project files

Explanation:

Most project teams I see put their user controls under 12/CONTROLTEMPLATES (often in a sub-folder) so as to follow what SharePoint does. This is mandatory in some techniques (e.g. custom field controls, delegate controls), but is not required for the type of WCM page user controls discussed in tip #1, and there are arguments for not storing those in the 12 hive. In summary, having a custom IIS virtual directory pointing to your VS project means we avoid having separate design-time and run-time locations for the project files.

Slide bullets:

  • Store user controls/page furniture files (e.g. image/XSL) here. Remove code files (e.g. .cs files) for non-dev environments
  • Faster development experience – no files to copy, no post-build events. Just save and F5!
  • Important if using tip #1 – don’t want to have to compile project [for post-build event to copy files] just for an HTML change

What it looks like:

In our case we created a virtual directory with an alias of ‘MIW’ (the name of our website) which points to our project files:

CustomIisVirtualDir

All our user control/page furniture file paths then look like ‘/MIW/PageLayoutsControls/foo.ascx’ etc.
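A side benefit (paths here are our illustrative ones) is that any code loading controls dynamically – such as the thin web part wrappers mentioned previously – uses exactly the same path at design-time and run-time:

using System.Web.UI;

// Inside a web part - the same virtual path resolves in every environment
protected override void CreateChildControls()
{
    Control userControl = this.Page.LoadControl("/MIW/PageLayoutsControls/foo.ascx");
    this.Controls.Add(userControl);
}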

Tip #3 – make life easier for the site authors/admins [reduce their stress and they’ll be on your side]

Explanation:

This one is a non-technical tip I wanted to throw in there – whilst we’re busy getting the front-end of the website right, I think it pays to also think about how authors/admins will use the back-end of the website (N.B. here I mean ‘business admin’ rather than IT pro). Although this is probably verging on the ‘political’ side of things, I’d advocate making their life as easy as possible – they often have a loud voice within the client organisation, and if they have bad feedback on what you’re building that’s a black mark against you.

Slide bullets:

  • Consider providing custom tools if the ‘SharePoint way’ is not simple enough (e.g. user management)
  • If you use custom lists for site data, provide a link for authors to find them (e.g. using a CustomAction – see the sketch after this list)
  • Remember these people are rarely SharePoint gurus!
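For the CustomAction bullet, the feature XML is only a few lines – a minimal sketch, where the Id, title and URL are made up:

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <CustomAction Id="MIW.ManageUsers"
                Location="Microsoft.SharePoint.StandardMenu"
                GroupId="SiteActions"
                Sequence="1000"
                Title="Manage site users">
    <UrlAction Url="~site/_layouts/MIW/ManageUsers.aspx" />
  </CustomAction>
</Elements>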

What it looks like:

Clearly this will look different for every project, but for our client we created a custom Site Actions sub-menu with some key links:

[Image: CustomSiteActionsOptions]

This sub-menu provides navigation to a couple of key lists used to power the site, and also to some custom screens we created for user management. Here we simplified things by wrapping up the relatively complex process – creating a user in the membership provider, adding them to various security groups across 2 sites (we had 2 ‘sister’ MOSS sites with single sign-on) and setting certain profile fields – into some simple screens, which finish with a convenient summary of what just happened after creating a new user:

[Image: CustomCreateUser]

Finally, the ‘edit profile’ screens used by business administrators were adapted from those used by end users, so that the admins became very familiar with each step of the ‘profile wizard’ and were better able to support their users.

Tip #4 – plan for unexpected errors

This is an interesting area, partly because we’re talking about that category of things which ‘should never happen’, but sometimes do. Having this conversation with a client (or non-technical management within your own organization) can be fun because the typical response is "whaddya mean, the website is going to go wrong?", but anyone familiar with software development principles knows there is no such thing as bug-free software. So the important thing is how we handle these cases when they do occur.

There are several tips here, so I’ll break them down into 4.1, 4.2 and 4.3:

Tip #4.1 – implement ‘friendly’ pages for 404s and unhandled errors

Explanation:

In brief, users of a public-facing website should never see a .Net error if something goes wrong on the website! If this ever does happen, the user is left with the feeling that your website is unreliable and can lose confidence in the organisation – think about it, do you ever remember seeing this kind of error on amazon.com/eBay.com/microsoft.com?

Slide bullets:

  • Typically use a custom HTTP module to override SharePoint’s default error-handling behaviour (a simplified sketch follows this list), checking for:
    • HttpContext.Current.Server.GetLastError()
    • HttpContext.Current.Response.StatusCode = 404
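A heavily-simplified sketch of such a module – the friendly page URLs are made up, and a real implementation needs care around redirect loops and logging:

using System;
using System.Web;

public class ErrorHandlingModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.Error += new EventHandler(OnError);
        application.EndRequest += new EventHandler(OnEndRequest);
    }

    private void OnError(object sender, EventArgs e)
    {
        // An unhandled exception occurred somewhere in the request
        Exception lastError = HttpContext.Current.Server.GetLastError();
        // .. notify the team here (see tip #4.2), then show the friendly page ..
        HttpContext.Current.Server.ClearError();
        HttpContext.Current.Response.Redirect("/Pages/Error.aspx");
    }

    private void OnEndRequest(object sender, EventArgs e)
    {
        // Nothing matched the URL - show a friendly 'not found' page
        if (HttpContext.Current.Response.StatusCode == 404)
        {
            HttpContext.Current.Response.Redirect("/Pages/PageNotFound.aspx");
        }
    }

    public void Dispose() { }
}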

What it looks like:

On the Standard Chartered site, our ‘friendly’ message looks like:

[Image: CustomErrorScreen]

Tip #4.2 – implement e-mail notifications to developers for errors

Explanation:

Sorting the user experience is one thing, but what about actually fixing the source of the problem? A key element of this is being alerted whenever a user does experience a problem, rather than relying on them reporting it. When I first told the team we’d be implementing this, I was really thinking about the UAT phase, and also that critical week or two after go-live when you’ll occasionally discover some latent issue which had managed to hide itself throughout all the testing. However, what we found is that we got even more value from this in the earlier dev/testing phases. At LBi, the test team sit near the development team, so when the Outlook ‘toast’ popped up with an exception message which wasn’t ‘known’ by the team, I’d use the information in the e-mail to work out which tester triggered the error, then armed with the stack trace and other information, shout over and ask exactly what they had done to arrive at the problem. Much more efficient than waiting for a full bug report at the end of the day/week!
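The notification itself is nothing clever – something like this inside the HTTP module’s error handling, where the addresses and server name are made up:

using System;
using System.Net.Mail;
using System.Web;

// Called from the module's Error event handler before redirecting the user
private void NotifyTeam(Exception exception)
{
    MailMessage message = new MailMessage("website@mycompany.com", "devteam@mycompany.com");
    message.Subject = "[MySite] Unhandled exception: " + exception.Message;
    message.Body = "User: " + HttpContext.Current.User.Identity.Name + Environment.NewLine +
                   "URL: " + HttpContext.Current.Request.Url + Environment.NewLine +
                   Environment.NewLine + exception;

    new SmtpClient("mail.mycompany.com").Send(message);
}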

Slide bullets:

  • Means an error cannot happen without the team being aware
  • We built this for production, but was even more useful in dev/testing!
  • Implemented in same custom HTTP module as error page redirection

What it looks like:

[Image: ExceptionEmail]

Tip #4.3 – implement proper tracing in your code

Explanation:

So being alerted about unhandled exceptions is one thing, but the stack trace and other details in the e-mail are often not enough information to actually fix the bug easily. This can be because we can’t see exactly what happened in the code (e.g. values of variables) leading up to the error. The only real way to obtain this information from production servers (or anywhere where we can’t use the debugger) is to add trace/log statements to your code, which when enabled will write out these details as the code executes. Since this has to be done manually, clearly the trade-off here is the time to implement this robustness. I would strongly recommend using a code productivity tool such as ReSharper (e.g. going beyond Visual Studio snippets) to drop in these statements quickly rather than relying on typing or copy/paste.
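There’s nothing exotic about the tracing itself – standard System.Diagnostics calls guarded by a TraceSwitch (names illustrative), so the verbosity can be changed in web.config without recompiling:

using System.Diagnostics;

public class UserManager
{
    // Switch value is set in <system.diagnostics> in web.config
    private static readonly TraceSwitch traceSwitch =
        new TraceSwitch("MySite", "Trace switch for MySite");

    public void CreateUser(string userName)
    {
        Trace.WriteLineIf(traceSwitch.TraceVerbose,
            string.Format("CreateUser: entered with userName '{0}'.", userName));

        // .. do the work, tracing warnings/errors as appropriate ..

        Trace.WriteLineIf(traceSwitch.TraceInfo, "CreateUser: completed successfully.");
    }
}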

Slide bullets:

  • Provides ability to quickly locate bugs in your code
  • Trade off is time/effort to implement
  • Consider productivity tools such as ReSharper/CodeRush to lessen impact

What it looks like:

This shows trace output in DebugView – notice I’ve configured DebugView to show ‘warning’ trace statements in yellow and ‘error’ statements in red:

[Image: TracingError]

Tip #5 – design for flexibility

Again, this one is broken down into two tips:

Tip #5.1 – using SharePoint lists for configuration data

Explanation:

As I said in my presentation abstract, since the only certainties in life are death, taxes and clients changing their minds, we should always aim to develop a solution which is flexible enough to accommodate some of the changes we may be asked to implement. We’re probably all used to the idea of storing site data which may change in SharePoint lists, but I like to extend this to ‘configuration’ information which dictates how the site behaves. The ‘Config Store’ framework I wrote for this can be found on Codeplex at www.codeplex.com/SPConfigStore – this provides the list, content type, caching and API to retrieve ‘config items’ in code.

To take an example from our project, we had a switch for whether newly-created users need to change their password on first logon. Clearly this is something we need enabled in production, but it can be a pain in our test environments where the client needs to create users and then get on with running specific test scripts. By having such a switch in a SharePoint list, key configuration of the site can be changed as easily as editing a SharePoint list item – as opposed to being stored in web.config, where I’d need to RDP onto the server, open a file, trigger an app pool recycle etc.

We stored 130+ configuration settings in our Config Store list, and of course, applied appropriate permissions so that unauthorized users could not access the list.
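In code, retrieving a value is a one-liner – the category/key here are from our project, and the API shape is approximate (check the Codeplex docs for the exact signatures):

// Values are cached by the framework, so repeated calls are cheap
string enforceChange = ConfigStore.GetValue("UserManagement", "EnforcePasswordChangeOnFirstLogon");

if (bool.Parse(enforceChange))
{
    // .. force the newly-created user to change their password ..
}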

Slide bullets:

  • Use SP lists for values the client may wish to edit, but consider caching

What it looks like:

[Image: ConfigStore]

Tip #5.2 – implement a custom master page class

Explanation:

Although SharePoint gets master pages from .Net 2.0, these really only deal with implementing a consistent look and feel. If you want consistent functionality or behaviour across your pages, a custom master page class can help. To implement this, the key is to modify the class which your master page derives from in the @Master directive. We used this approach to implement code which needs to execute on every page load, and even if you don’t have this requirement from the start, I’d advocate using it so you have a convenient place to put such code if the need arises.
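A skeleton of such a class might look like this – the checks are shown as comments since the implementations are site-specific, and the class name matches the @Master directive shown further below:

using System;
using System.Web.UI;

namespace MyCompany.MyClient.MyProject
{
    // All master pages in the site derive from this rather than MasterPage directly
    public class BasePage : MasterPage
    {
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);

            // Code here runs on every page load, e.g.:
            // - enforce HTTPS
            // - check the user has accepted Terms & Conditions
            // - check trial access hasn't expired
        }

        // Convenience property for commonly-checked items
        public string CurrentUserName
        {
            get { return Page.User.Identity.Name; }
        }
    }
}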

Slide bullets:

  • Use for any code which should execute on every page load
  • Examples on our site:
    • Check if trial user/when trial access ends
    • Check if accepted Terms & Conditions
    • Check user has supplied their initial profile info
    • Enforce use of HTTPS
  • Can also use to expose properties for commonly checked items (e.g. username, logged in time)

What it looks like:

<%@ Master language="C#" Inherits="MyCompany.MyClient.MyProject.BasePage" %>

Conclusion/resources

So that’s it for my tips, all feedback/suggestions welcome! In terms of key resources, in addition to AC’s book here are some selected articles and so on I’d recommend looking into:

SharePoint dev strategies – it’s not all about Features!

Something I’ve been meaning to discuss for a long time is the decision to develop SharePoint artifacts using Features or some other approach. I actually discussed this back in May 2007 in my post SharePoint deployment options : Features or Content Deployment?, but feel it’s a topic worth revisiting/expanding on as I often see teams developing with Features without fully working out what exactly they are getting out of this approach. As you can guess by the article title, I’m not sold on the idea of using Features all the time (readers who have followed this blog from the start might find this surprising given I wrote many articles on how to work with Features) and want to put forward some points to consider when working out whether you need them or not.

Let’s first consider some (selected) characteristics of Features:

  • Provide a means of deploying SharePoint artifacts such as list templates/site columns/content types to multiple environments (e.g. dev, test, production)
  • Currently the only way to deploy such artifacts across multiple site collections
  • Require some extra overhead to create, even with the community tools available (in comparison with creating artifacts directly in the SharePoint UI)
  • Little/no support for certain key updates (e.g. updating a content type which has already been deployed and is in use) – updates must be done through the user interface or the API, since modifying the original Feature files to make changes is unsupported.

Given these points, one scenario I really can’t see the benefit of Features for is when the solution consists of just one site collection – which is often the case for WCM sites. Why go through the extra hassle of packaging up artifacts into Features and be faced with difficulties managing updates when the artifacts will only ever exist in one site collection anyway? Sure, they may need to be deployed between environments but we have other ways of doing that.

N.B. The same applies to site definitions – why go to the trouble of creating a custom site definition when only one site will ever be created from it?

The alternative

If you aren’t forced into using Features to deal with multiple site collections, not using them could be the ‘most valid’ choice. In my recent WCM projects, I haven’t used Features for anything which doesn’t require a Feature (e.g. a VS workflow, a CustomAction etc.) for a long time now, including the project I discussed recently in SharePoint WCM in the finance sector and Developer lessons learnt – SharePoint WCM in the finance sector. Certainly given the extremely tight timescale on that project, I actually feel we could have failed to deliver on time if we had used Features.

Instead, my approach is to create a blank site in the dev environment, and do all the list/site column/content type/master page development there using the SharePoint UI and SPD. My next step (perhaps not surprising to regular readers) is to use my Content Deployment Wizard tool to move all the SharePoint artifacts to the other environments when ready. Equally, you could choose to write your own code which does the same thing using the now well-documented Content Deployment API – there’s a sketch of this after the list below. You’ll need to deal with any filesystem and .Net assets separately (generally before you import the SharePoint content on the destination), but in my view we’ve at least drastically simplified the SharePoint side of things. This seems to work well for many reasons:

  • More efficient since no development time lost to building Features
  • The update problem described earlier is taken care of for you (by the underlying Content Deployment API) – as an example, add a field to a content type in dev, deploy the content which uses it and the field will be added on the import site
  • Concept of a ‘package’ is maintained, so .cmp files produced by the Wizard can be handed to a hosting company for them to import using the Wizard at their end. I hear of quite a few people doing this.
  • We can store the .cmp files in source control and use them as part of a ‘Software Development Lifecycle’ approach. My approach (and I’d guess that of others using the tool in this way) is to store the .cmp file alongside the filesystem files such as .ascx files for the current ‘release’, and import them as part of the deployment process of moving the release to the next environment.
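For those preferring the code route mentioned above, a bare-bones export using the Content Deployment API might look like the following – URLs/paths are made up, and the import side uses SPImport/SPImportSettings in the same way:

using Microsoft.SharePoint.Deployment;

// Minimal sketch - export an entire site collection to a .cmp file
SPExportSettings settings = new SPExportSettings();
settings.SiteUrl = "http://dev-server";
settings.ExportMethod = SPExportMethodType.ExportAll;
settings.FileLocation = @"C:\Deployment";
settings.BaseFileName = "release1.cmp";

using (SPExport export = new SPExport(settings))
{
    export.Run();
}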

As an aside, when I decided to write a tool which would simplify dealing with dev/QA/UAT/production environments on SharePoint projects, I was initially torn between ‘solving the content type update problem’ and something based around the Content Deployment API. One reason why I decided on the latter was because the CD API already seemed to have solved the other issue!

Now I’m certainly not saying it works perfectly every time (it doesn’t, though it’s much improved following SP1 and the Infrastructure Update), but in my experience I seem to spend less time over the course of a project resolving deployment issues than I would building/troubleshooting Features. Additionally, using Content Deployment allows deployment of, well, content – if your solution relies on pre-created publishing pages, or you have a scenario such as your client creating some content in UAT which needs to be moved to production before go-live, Features won’t help you here. The Content Deployment mechanism, however, is designed for just that.

Where do Solutions (.wsp) fit in all this?

So to summarize the above, my rule of thumb for projects which aren’t built around multiple site collections is don’t use Features for things which don’t absolutely require them. So where does that leave Solution packages (.wsp files) – should they be abandoned too? Well no, definitely not in my view. Solutions solve a slightly different problem set:-

  • Deploying files to SharePoint web servers such that each server in a farm is a mirror of another. Ensuring all web front-ends have the same files used by SharePoint is, of course, a key requirement for SharePoint farms – this applies to Feature files when using them, but also to assemblies, 12 hive files etc.
  • Web.config modifications e.g. the ‘SafeControls’ entry required for custom web parts/controls
  • Code Access Security config modifications e.g. those required for controls not running from the GAC
  • Some other tasks, such as deployment of web part definition files (.webpart)

Really, there’s nothing stopping you from doing all this manually if you wanted to (especially if you’re always deploying to a single server, so there are fewer things to keep in sync). But the point here is that Solutions genuinely do make your life easier for comparatively little effort, so the ‘cost/benefit’ ratio is perhaps different to Features for me – the key is using one of the automated build approaches such as WSP Builder. So, my recommendation would generally be to always use Solutions for assemblies, 12 hive files etc., particularly in multiple-server farm environments.
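For reference, getting a Solution built this way into a farm is just a couple of stsadm commands (solution name/URL made up):

stsadm -o addsolution -filename MyProject.wsp
stsadm -o deploysolution -name MyProject.wsp -url http://myserver -immediate -allowgacdeployment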

Conclusion

My rules of thumb then, are:

  • Consider not using Features (and site definitions) if your site isn’t based around multiple site collections – using the Wizard or some other solution based on Content Deployment can be the alternative
  • Use Solutions if you have multiple servers/environments, unless you’re happy to have more work to do to keep them in sync
  • If you are using Features, plan an approach for dealing with updates such as content type updates

My message here possibly goes against some of the guidance you might see other folks recommend, but I’m just going on the experience I’ve had delivering projects using different approaches. As always, the key is to consider deployment approach before you actually come to do it!

P.S. Also remember, deploying using backup and restore is a bad idea 😉

Developer lessons learnt – SharePoint WCM in the finance sector

So in my recent SharePoint WCM in the finance sector post, I talked about what we built and why I think the result is kind of interesting. What I want to do today is share some of the technical lessons learnt, and give a sense of what worked and what didn’t. As I mentioned last time, UK-based folks will hopefully be able to gain more than I can provide here when the site gets presented at the SharePoint UK user group, meaning we’ll answer any question you care to come up with, not just some of the developer stuff I want to discuss today.

Now to frame all this, it’s important to consider the type of project this was – the terms mean slightly different things to different people, but to me the emphasis was on ‘development’, as opposed to ‘implementation’ or ‘customization’. In code terms, we ended up with the following:

  • 17 Visual Studio projects in total
  • 4 Windows services
  • 5 nightly batch processes
  • 5 supplementary SQL tables (outside of the SharePoint db)

Not bad for 8 weeks’ work. As an aside, although the first number seems surprisingly high, in the technical washup I did with the team nobody thought this wasn’t the right way to factor the code. This is partly explained by the fact this single project is actually part of a bigger program of projects being done for the client, and also partly by the complexity of the Endeca search implementation and batch processes we needed.

What worked well (in no particular order)

  • Using a ‘development farm’ amongst the developers – this means the content database is shared, and thus no effort is required for one developer to see the lists, site columns, content types, master pages/layouts etc. created by others. This is actually the only way to do team development in SharePoint for me, but worthwhile mentioning it as I know not all SharePoint shops do things this way.
  • Proper use of tracing – this is the idea of writing log statements throughout code to easily diagnose problems once the code has been deployed to other environments (e.g. QA/UAT/production). We used the standard .Net System.Diagnostics trace framework with levels of Verbose, Info, Warning and Error – this has been familiar to me for a long time but a couple of the devs were new to it and agreed it was invaluable. In particular, we had a lot of library code and it’s often difficult to find logic bugs when you can’t directly see the result of something on a screen. For me, tracing essentially gives you the power to find certain bugs in seconds or minutes which could otherwise take hours to resolve. Although adding the tracing code can slow down coding, to mitigate this we used..
  • ReSharper – at the start of the project I created several ReSharper templates to call our common code (e.g. for tracing), and got all the team to download trial versions of ReSharper. This meant we could add trace statements in just a few keystrokes, meaning the ‘I didn’t have time to add trace!’ excuse couldn’t be used 🙂
  • My Config Store framework based on a SharePoint list * – we stored over 130 ‘configuration items’, from ‘True’/‘False’ config switches such as ‘enforce password change for user’s first logon’, to known URLs, to certain strings displayed throughout the site. We also found a couple of areas for improvement (e.g. a field not big enough to store XML fragments!) which will hopefully make it into the next release.
  • Implementing logging/notifications for unhandled exceptions – I know the MS Enterprise Library has a component for this, but we developed our own using a custom HTTP handler which sits in front of SharePoint’s request processing. This means that whenever something happens in the code which we’re not expecting, we find out about it immediately and can see the stack trace and other debug info in the e-mail. This was invaluable when the testers got to work, as it meant we could proactively deal with bugs before they even got reported. As soon as we noticed a mail with a new exception, we’d shout over to the particular test guy (identified by the user ID) "What exactly did you just do?" (which impressed them greatly!), so we could nail the exact set of circumstances/data which caused the bug right there and then.
  • My Content Deployment Wizard tool * – I also played the ‘deployment/release manager’ role on this project, so I was probably the guy who benefited from this the most, but I’ve actually used it somewhere on every single project I’ve done since building it. For releases when the team had updated x page layouts, x lookup lists and x Config Store items, the tool is invaluable for picking out just the changed items and deploying them to the other environments. For Config Store items in particular it was useful, as some config is different between environments (similar to web.config keys) so you don’t want to overwrite the entire list. For early releases when the team had made lots of complex ‘schema’ updates (such as intricate changes to site columns/lookups/content types), due to the time pressures I elected to take the ‘everything will definitely work this way’ route – drop the site collection on the target and import the whole thing (since there was no valuable data to preserve) – so there are some complex deployment scenarios I still haven’t fully tested personally. But with 3 environments on top of the dev environment to deploy to, the Wizard was pretty much a lifesaver.
  • Cross-site lookup field by SharePoint Solutions – this solves the problem that a lookup column can only look up data in the current web. We used this for several key sets of data, so we get to have one copy and one copy only. Damn useful.
  • LINQ to SQL – we used this for CRUD operations on our supplementary SQL tables, and the guys who used it agreed they saved significant time over the standard approach of writing ADO.Net code.

* hopefully it doesn’t come across as shameless self-promotion to include these – the very reason I built them was to solve recurring problems I saw on SharePoint dev projects, and both utilities really did help us here.

Project challenges (in no particular order)

  • Team Foundation Server weirdness – for reasons we still haven’t established, we found the .csproj file for the web project (i.e. the most critical VS project!) would be checked out whenever a developer compiled the solution. With multiple checkout enabled, this means that pretty much every developer had the project file checked out all the time, regardless of whether he/she was making any project changes (e.g. adding new classes). This meant we had many more merge issues than normal – not fun.
  • VM issues – for a while we thought ReSharper was the culprit here, but a VS hotfix brought more stability. A hunch says at least some of the issues are 64-bit related (our dev environment was matched to production in this respect), since often the problem would manifest itself via Visual Studio (a 32-bit application remember). Frequent VS crashes, "attempted to read or write protected memory" messages in the event log – oh joy.
  • Failure to identify shared code soon enough – often a concern on complex development projects when the team is working at high speed. We did daily standup meetings (similar to scrum) but I suspect we may have focused too much on issues rather than what was being ‘successfully’ developed. So we lost some time to refactoring to bring things back in line, but this is why I like to think of the approach as ‘Dangerously Rapid Application Development’ (for those who remember the term ;-))
  • Issues arising from sharing IP addresses on SSL – in several of our environments, we attempted to use the technique documented by Adrian Spear in To setup SSL on multiple Sharepoint 2007 web applications using host headers under IIS 6.0. I’ve used this successfully in the past but had some problems this time round – despite working fine in our QA environment, we had problems in other places. After carefully analyzing the differences, I worked out that this technique will only work if the SSL certificate being used is a wildcard certificate or is matched on the machine name rather than the site URL. This might be obvious to other people but wasn’t to me!

Hope someone finds this useful!