Introducing EffectiveIoC

Last week I tweeted a few times about writing an IoC container in less than 60 lines of code.  I also blogged about how I thought the average IoC container was overly complex and didn’t promote DI-friendliness.

Well, EffectiveIoC is the result of that short spike.  The core ended up being about 60 lines of code (supporting type mappings—including open and closed generics—and app.config mappings).  I felt a minimum viable IoC container needed a little more than that, so I’ve also included programmatic configuration and support for instances (effectively singletons).  I’ve also thrown in the ability to map an action to a type, to do whatever you want when the type is resolved.  Without all the friendly API, it works out to be about 80-90 lines of code.


Well, the project page sums this up nicely.  For the most part, I wanted something that promoted DI-friendly design—which, from my point of view, is constructor injection.  So, EffectiveIoC is very simple.  It supports mapping one type to another (the from type must be assignable to the to type) and registering instances by name (key).  Registering type mappings can be done in app.config:
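(The original config snippet is missing here; the following is a hedged sketch of what an app.config type mapping might look like. The section, element, and attribute names are assumptions based on the description above, not EffectiveIoC’s verified schema.)

```xml
<configuration>
  <configSections>
    <!-- hypothetical section handler name -->
    <section name="ioc" type="EffectiveIoC.IoCSection, EffectiveIoC" />
  </configSections>
  <ioc>
    <types>
      <!-- map the "from" type to an assignable "to" type -->
      <type from="MyApp.IRepository, MyApp" to="MyApp.SqlRepository, MyApp" />
    </types>
  </ioc>
</configuration>
```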

or in code:
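(The original snippet is missing; a hedged sketch of programmatic registration follows. The `IoC` class and `Map` method names are assumptions based on the post’s description, not the library’s verified API.)

```csharp
// Hypothetical programmatic type mapping; names are assumptions.
IoC.Map(typeof(IRepository), typeof(SqlRepository));
// or, with a generic-friendly overload:
IoC.Map<IRepository, SqlRepository>();
```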

And type instances can be resolved like this:
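(A hedged sketch of resolution; the `Resolve` method name is an assumption.)

```csharp
// Hypothetical resolution call; the name is an assumption.
var repository = IoC.Resolve<IRepository>();
```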

Instances can also be registered.  In config this can be done like this:
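(The original config snippet is missing; a hedged sketch, with element and attribute names assumed rather than taken from EffectiveIoC’s actual schema.)

```xml
<ioc>
  <instances>
    <!-- register a named instance (effectively a singleton) -->
    <instance name="default" type="MyApp.SqlRepository, MyApp" />
  </instances>
</ioc>
```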

Or in code, like this:
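(A hedged sketch; the method name is an assumption based on the description.)

```csharp
// Hypothetical instance registration by name (key).
IoC.RegisterInstance("default", new SqlRepository());
```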

Instances can be resolved by name as follows:
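(A hedged sketch; the method shape is an assumption.)

```csharp
// Hypothetical resolve-by-name; cast because the key-based overload
// presumably can't know the type statically.
var repository = (IRepository)IoC.Resolve("default");
```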

For more information and to view the source, see the GitHub project site:


Azure Table Storage and the Great Certificate Expiry of 2013

I won’t get into too much detail about what happened, but on 22-Feb-2013, at roughly 8pm, the certificates used for * expired.  The end result was that any application that used the Azure Table Storage .NET API (or the REST API with the default certificate validation) began to fail connecting to Azure Table Storage.  More details can be found here.  At the time of this writing, nothing had been published on any root cause analysis.

The way that SSL/TLS certificates work is that they provide a means by which a 3rd party can validate an organization (i.e. a server with a given URL, or range of URLs).  That validation occurs by using the keys within the certificate to sign data from the server.  A client can then be assured that if a trusted 3rd party issued a cert for that specific URL, and that cert was used to sign data from that URL, the data *must* have come from a trusted server.  The validation occurs as part of a “trust chain”.  That chain includes things like checking for revocation of the certificate, the URL, the start date, the expiry date, etc.  The default action is to check the entire chain based on various policies—which includes checking to make sure the certificate hasn’t expired (based on the local time).

Now, one might argue that “expiry” of a certificate may not be that important.  That’s a specific decision for a specific client of said server.  I’m not going to suggest that ignoring the expiry is a good or a bad thing.  But, you’re well within your rights to come up with your own policy on the “validity” of a certificate from a specific server.  For example, you might ignore the expiry altogether, or you may have a two-week grace period, etc.

So, how would you do that? 

Fortunately, you can override the server certificate validation in .NET by setting the ServicePointManager.ServerCertificateValidationCallback property to some delegate that contains the policy code that you want to use.  For example, if you want to have a two week grace period after expiry, you could set the ServerCertificateValidationCallback like this:
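(The original snippet is missing; the following sketch uses the real ServicePointManager API, but the grace-period policy itself is illustrative, and production code should also verify that expiry is the *only* chain error before accepting.)

```csharp
using System;
using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

// Accept certificates that expired within the last two weeks.
ServicePointManager.ServerCertificateValidationCallback =
    (sender, certificate, chain, sslPolicyErrors) =>
    {
        // No errors at all: accept as usual.
        if (sslPolicyErrors == SslPolicyErrors.None) return true;

        var cert2 = certificate as X509Certificate2
                    ?? new X509Certificate2(certificate);
        var expiry = cert2.NotAfter;

        // Expired, but still within the two-week grace period: accept.
        // (A real policy should also confirm no other chain errors exist.)
        return expiry < DateTime.Now
            && expiry > DateTime.Now.AddDays(-14);
    };
```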

Now, any subsequent calls into the Azure Table Storage API will invoke this callback and you can return true if the certificate is expired but still in the grace period.  E.g. the following code will invoke your callback:
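(A sketch against the 2013-era Microsoft.WindowsAzure.Storage client; treat the member names as assumptions if your SDK version differs.)

```csharp
// Any of these storage calls performs an HTTPS request, which triggers
// the validation callback during the TLS handshake.
var account = CloudStorageAccount.Parse(connectionString);
var tableClient = account.CreateCloudTableClient();
var table = tableClient.GetTableReference("people");
table.CreateIfNotExists(); // invokes ServerCertificateValidationCallback
```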


Unfortunately, the existing mechanism (short of doing the SSL/TLS negotiation entirely yourself), ServicePointManager.ServerCertificateValidationCallback, is a global setting: it effectively changes the server certificate validation process of every single TLS stream within a given AppDomain (HttpWebRequest, TlsStream, etc.).  This also means that any other code that feels like it can change the server certificate validation process out from under you.

So, what can you do about this?  Well, nothing to completely eliminate the race condition; ServicePointManager.ServerCertificateValidationCallback is simply designed wrong.  But, you can set ServerCertificateValidationCallback as close as possible to the operation you want to perform.  This means doing that for each and every operation.  And seeing as how the Azure API may take some time before actually invoking a web request, there’s a larger potential for a race condition than we’d like.

An alternative is to invoke the REST API for Azure Table Storage and set ServerCertificateValidationCallback just before you invoke your web request.  This, of course, is a bit tedious considering there’s an existing .NET API for table storage.

Introducing RestCloudTable

I was interested in working with Azure REST APIs in general; so, I created a simpler .NET API that uses the REST API but also allows you to specify a validation callback that will set ServerCertificateValidationCallback immediately before invoking web requests.  This, of course, doesn’t fix the design issue with ServerCertificateValidationCallback but reduces the risk of race conditions as much as possible.

I’ve created a RestCloudTable project on GitHub:  Feel free to have a look and use it as is, if you’d like to avoid any potential future Azure Table Storage certificate expiry issues.


async/await Tips

There’s been some really good guidance about async/await in the past week or two.  I’ve been tinkering away at this post for a while now—based on presentations I’ve been doing, discussions I’ve had with folks at Microsoft, etc.  Now seems like a good time to post it.

First, it’s important to understand what the "async" keyword really means.  At face value, async doesn’t make a method (anonymous or member) “asynchronous”—the body of the method does that.  What it does mean is that there’s a strong possibility that the body of the method won’t be entirely evaluated by the time the method returns to the caller; i.e. it “might” be asynchronous.  What the compiler does is create a state machine that manages the various “awaits” that occur within an async method, handling results and invoking continuations when results become available.  I’m not going to get into too much detail about the state machine, other than to say the entry to the method is now the creation of that state machine and the initiation of moving from state to state (much like the creation of an enumerable and moving from one element—the state—to the next).  The important part to remember is that when an async method returns, there can be some code that will be evaluated in the future.

If you’ve ever done any work with HttpWebRequest and working with responses (e.g. disposal), you’ll appreciate being able to do this:
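(The original snippet is missing; a minimal sketch using HttpWebRequest’s Task-based GetResponseAsync, with an illustrative method name:)

```csharp
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;

public static async Task DumpLengthAsync(string url)
{
    var request = WebRequest.CreateHttp(url);
    // The response and reader are disposed even though the operation
    // completes asynchronously--using and await compose naturally.
    using (var response = await request.GetResponseAsync())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        var body = await reader.ReadToEndAsync();
        Console.WriteLine(body.Length);
    }
}
```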


await is great for declaring asynchronous operations in a sequential way.  This allows you to use other sequential syntax, like using and try/catch, to deal with common .NET axioms in the axiomatic way.  await, in my opinion, is really about allowing user interfaces to support asynchronous operations in an easy way with intuitive code.  But, you can also use await to wait for parallel operations to complete.  For example, on a two-core computer I can start up two tasks in parallel then await each of them (one at a time) to complete:
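(The original snippet is missing; a sketch, run inside an async method, where Thread.Sleep stands in for real work:)

```csharp
var stopwatch = Stopwatch.StartNew();
// Start both tasks immediately; on two or more cores they run in parallel.
Task first = Task.Run(() => Thread.Sleep(1000));
Task second = Task.Run(() => Thread.Sleep(1000));
await first;
Console.WriteLine("first:  {0}", stopwatch.Elapsed);
await second;
Console.WriteLine("second: {0}", stopwatch.Elapsed);
// The waits overlap, so both elapsed values are near one second.
```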

If you run this code, you should see that the elapsed values (on a computer with two or more cores/CPUs) are very similar (not 1 second apart).  Contrast the subtle differences with:
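(The original snippet is missing; the sequential contrast, again inside an async method:)

```csharp
var stopwatch = Stopwatch.StartNew();
// Awaiting each Task.Run before starting the next: the waits no longer overlap.
await Task.Run(() => Thread.Sleep(1000));
Console.WriteLine("first:  {0}", stopwatch.Elapsed);
await Task.Run(() => Thread.Sleep(1000));
Console.WriteLine("second: {0}", stopwatch.Elapsed);
// Now the second elapsed value is roughly a second later than the first.
```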

While you can use await with parallel operations, the subtle differences from sequential asynchronous operations can lead to incorrect code due to misunderstandings.  I suggest paying close attention to how you structure your code so it is in fact doing what you expect it to do.  In most cases, I simply recommend not doing anything “parallel” with await.

async void

The overwhelming recommendation is to avoid async methods that return void.  Caveat: the reason async void was made possible by the language teams was the fact that most event handlers return void; but it is sometimes useful for an event handler to be asynchronous (e.g. await another asynchronous method).  If you want to have a method that uses await but doesn’t return anything (e.g. would otherwise be void) you can simply change the void to Task.  e.g.:
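(The original snippet is missing; a sketch with an illustrative method name, where the only change from an async void version is the return type:)

```csharp
public async Task SaveDataAsync()
{
    await Task.Delay(100); // stands in for real asynchronous work
}
```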

This tells the compiler that the method doesn’t asynchronously return a value, but can now be awaited:
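(A self-contained sketch; method names are illustrative:)

```csharp
// Declared here so the snippet stands alone.
public async Task SaveDataAsync() { await Task.Delay(100); }

public async Task SaveAllAsync()
{
    await SaveDataAsync(); // a Task-returning method can be awaited
}
```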


Main can’t be async.  As described above, an async method can return while some of its code remains to be evaluated in the future.  When Main returns, the application exits.  If you *could* have an async Main, it would be similar to doing this:
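(An illustrative sketch: Main kicks off asynchronous work but cannot wait for it.)

```csharp
static void Main()
{
    // Schedule work and a continuation, then fall off the end of Main.
    Task.Delay(1000).ContinueWith(t => Console.WriteLine("done"));
    // Main returns immediately; the process may exit before "done" prints.
}
```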

This, depending on the platform, the hardware, and the current load, would mean that the Console.WriteLine *might* get executed.

Fortunately, this is easily fixed by creating a new method (that can be modified with async) and calling it from Main.
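(A sketch of that workaround; the MainAsync name is illustrative:)

```csharp
static void Main()
{
    MainAsync().Wait(); // block until the asynchronous work completes
}

static async Task MainAsync()
{
    await Task.Delay(1000);
    Console.WriteLine("done"); // now guaranteed to run before the process exits
}
```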


One of the biggest advantages of async/await is the ability to write sequential code with multiple asynchronous operations.  Previously this required methods for each continuation (actual methods prior to .NET 2.0 and anonymous methods and lambdas in .NET 2.0 and  .NET 3.5).  Having code span multiple methods (whether they be anonymous or not) meant we couldn’t use axiomatic patterns like try/catch (not to mention using) very effectively—we’d have to check for exceptions in multiple places for the same reason.

There are some subtle ways exceptions can flow back from async methods, but fortunately, given the sequential nature of programming with await, you may not care.  But, as with most things, it depends.  Most of the time exceptions are caught in the continuation.  This usually means on a thread different from the main (UI) thread.  So, you have to be careful what you do when you process the exception.  For example, consider the following two methods.
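(The original snippet is missing; a sketch, assuming per the discussion below that DoSomething1 throws before its first await and DoSomething2 throws after one. The exception type is illustrative.)

```csharp
public async Task DoSomething1()
{
    // Throws before any await: the returned Task is already faulted.
    throw new InvalidOperationException();
}

public async Task DoSomething2()
{
    await Task.Delay(100);
    // Throws in the continuation, typically on a thread-pool thread.
    throw new InvalidOperationException();
}
```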

And if we wrapped calls to each in try/catch:
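(A sketch, assuming DoSomething1 throws before its first await and DoSomething2 after one; Start is the calling method:)

```csharp
public async Task Start()
{
    try
    {
        await DoSomething1();
    }
    catch (InvalidOperationException)
    {
        // Caught on the thread that called Start: the task was
        // already faulted when the await occurred.
    }

    try
    {
        await DoSomething2();
    }
    catch (InvalidOperationException)
    {
        // Caught on the continuation's thread, not necessarily the caller's.
    }
}
```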

In the first case (calling DoSomething1) the exception is caught on the same thread that called Start (i.e. before the await occurred).  *But*, in the second case (calling DoSomething2) the exception is not caught on the same thread as the caller.  So, if you wanted to present information via the UI then you’d have to check to see if you’re on the right thread to display information on the UI (i.e. marshal back to the UI thread, if needed).

Of course, any method can throw exceptions in any of the places shown in the above two methods, so if you need to do something with thread affinity (like work with the UI) you’ll have to check whether you need to marshal back to the UI thread (Control.BeginInvoke or Dispatcher.Invoke).

Unit testing

Unit testing asynchronous code can get a bit hairy.  For the most part, testing asynchronously is really just testing the compiler and runtime—not something that is recommended (i.e. it doesn’t buy you anything; it’s not your code).  So, for the most part, I recommend people test the units they intend to test, e.g. the synchronous code.  For example, I could write an asynchronous method that calculates Pi as follows:
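(The original snippet is missing; a hedged reconstruction of the kind of method described. The class/method names and the series used are illustrative.)

```csharp
using System;
using System.Threading.Tasks;

public static class PiCalculator
{
    // Offload a synchronous calculation to a background task.
    public static Task<double> CalculatePiAsync()
    {
        return Task.Run(() => CalculatePi());
    }

    // Leibniz series; slow to converge, but fine for illustration.
    public static double CalculatePi()
    {
        double pi = 0;
        for (int i = 0; i < 1000000; i++)
            pi += (i % 2 == 0 ? 4.0 : -4.0) / (2 * i + 1);
        return pi;
    }
}
```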

…which is fairly typical.  Asynchronous code is often the act of running something on a background thread/task.  I *could* then write a test for this that executes code like this:
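(A sketch; MSTest-style attributes and the CalculatePiAsync name are assumed, and the tolerance is illustrative:)

```csharp
[TestMethod]
public void CalculatesPi()
{
    // Blocks on the asynchronous wrapper: this mostly exercises
    // Task.Run, not the math we actually care about.
    var pi = CalculatePiAsync().Result;
    Assert.AreEqual(3.14159, pi, 0.00001);
}
```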

But, what I really want to test is that Pi is calculated correctly, not that the calculation occurred asynchronously.  In certain circumstances, something may *not* execute asynchronously anyway.  So, in cases like this, I generally recommend the test actually be:
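(A sketch, assuming the math has been factored into a synchronous CalculatePi:)

```csharp
[TestMethod]
public void CalculatesPiCorrectly()
{
    // Tests the unit we care about: the calculation itself.
    var pi = CalculatePi();
    Assert.AreEqual(3.14159, pi, 0.00001);
}
```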

Of course, that may not always be possible.  You may only have an asynchronous way of invoking code, and if you can’t decompose it into asynchronous and synchronous parts for testability, then using await is likely the easiest option.  But, there are some things to watch out for.  When writing a test for this asynchronous method you might intuitively write something like this:
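(A sketch of the intuitive-but-problematic version; names assumed as before:)

```csharp
[TestMethod]
public async void CalculatesPi() // async void: the runner has no Task to wait on
{
    var pi = await CalculatePiAsync();
    Assert.AreEqual(3.14159, pi, 0.00001);
}
```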

But, the problem with this method is that the Assert may not occur before the test runner exits.  This method doesn’t tell the runner that it should wait for a result.  It’s effectively async void (another place not to use it).  This can easily be fixed by changing the return type from void to Task:
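(The same test with the one-word fix:)

```csharp
[TestMethod]
public async Task CalculatesPi() // Task return: the runner awaits completion
{
    var pi = await CalculatePiAsync();
    Assert.AreEqual(3.14159, pi, 0.00001);
}
```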

A *very* subtle change; but this lets the runner know that the test method is “awaitable” and that it should wait for the Task to complete before exiting the runner.  Apparently many test runners recognize this and act accordingly so that your tests will actually run and your asynchronous code will be tested.


IDisposable and Class Hierarchies

In my previous post, I showed how the Dispose Pattern is effectively obsolete. But, there’s one area that I didn’t really cover.  What do you do when you want to create a class that implements IDisposable, doesn’t implement the Dispose Pattern, and will be derived from classes that will also implement disposal?

The Dispose Pattern covered this by coincidence.  Since something that derives from a class that implements the Dispose Pattern simply overrides the Dispose(bool) method, you effectively have a way to chain disposal from the sub to the base.  But there’s a lot of unrelated chaff that comes along with the Dispose Pattern if that’s all you need.  What if you want to design a base class that implements IDisposable and supports subclasses that might want to dispose of managed resources?  Well, you’re not screwed.

You can simply make your IDisposable.Dispose method virtual and a sub can override it before calling the base.  For example:

	public class Base : IDisposable {
		private IDisposable managedResource;
		public virtual void Dispose() {
			if (managedResource != null) managedResource.Dispose();
		}
	}
	public class Sub : Base {
		private IDisposable managedResource;
		public override void Dispose() {
			if (managedResource != null) managedResource.Dispose();
			base.Dispose();
		}
	}

If you don’t implement a virtual Dispose and you don’t implement the Dispose Pattern, you should use the sealed modifier on your class, because you’ve effectively made it impossible for derived classes to dispose of both their resources and the base’s resources in all circumstances.  In the case of a variable declared as the base class type that holds an instance of a subclassed type (e.g. Base b = new Sub()), only the base Dispose will get invoked (in all other cases, the sub Dispose will get called).


If you do have a base class that implements IDisposable and doesn’t implement a virtual Dispose or implement the Dispose Pattern (e.g. outside of your control) then you’re basically screwed in terms of inheritance.  In this case, I would prefer composition over inheritance.  The type that would have been the base simply becomes a member of the new class and is treated just like any other disposable member (dealt with in the IDisposable.Dispose implementation).  For example:

	public class Base : IDisposable {
		public void Dispose() { /* dispose Base’s resources */ }
	}
	public class Sub : IDisposable {
		private Base theBase = new Base();
		public void Dispose() {
			theBase.Dispose();
		}
	}

This, of course, means you need to either mirror the interface that the previously-base-class provides, or provide a sub-set of wrapped functionality so the composed object can be used in the same ways it could have been had it been a base class.

This is why it’s important to design consciously—you need to understand the ramifications and side-effects of certain design choices.

The Dispose Pattern as an anti-pattern

When .NET first came out, the framework only had abstractions for what seemed like a handful of Windows features.  Developers were required to write their own abstractions around the Windows features that did not have abstractions.  Working with these features required you to work with unmanaged resources in many instances.  Unmanaged resources, as the name suggests, are not managed in any way by the .NET Framework.  If you don’t free those unmanaged resources when you’re done with them, they’ll leak.  Unmanaged resources need attention and they need it differently from managed resources.  Managed resources, by definition, are managed by the .NET Framework and their resources will be freed automatically a great proportion of the time when they’re no longer in use.  The Garbage Collector (GC) knows (or is “told”) what objects are in use and what objects are not in use.

The GC frees managed resources when it gets its timeslice(s) to tidy up memory—which will be some time *after* the resources stop being used.  The IDisposable interface was created so that managed resources can be deterministically freed.  I say “managed resources” because interfaces can do nothing with destructors and thus the interface inherently can’t do anything specifically to help with unmanaged resources.

“Unmanaged resources” generally means dealing with a handle and freeing that handle when it’s no longer in use.  “Support” for Windows features in .NET abstractions generally involved freeing those handles when not in use.  Much like with managed resources, to deterministically free them you had to implement IDisposable and free them in the call to Dispose.  The problem arose if you forgot to wrap the object in a using block or otherwise didn’t call Dispose: the managed resources would be detected as unused (unreferenced) and be freed automatically at the next collection; unmanaged resources would not.  Unmanaged resources would leak and could cause potential issues with Windows in various ways (handles are a finite resource, for one, so an application could “run out”).  So, those unmanaged resources had to be freed during finalization of the object (the automatic cleanup of the object during collection by the GC) if they had not already been freed during dispose.  Since finalization and Dispose are intrinsically linked, the Dispose Pattern was created to make this process easier and consistent.

I won’t get into much detail about the Dispose Pattern, but what this means is that to implement the Dispose Pattern, you must implement a destructor that calls Dispose(bool) with a false argument.  Destructors that do no work force an entry to be made in the finalize queue for each instance of that type.  This forces the type to use its memory until the GC has a chance to collect and run finalizers. This impacts performance (needless finalization) as well as adds stress to the garbage collector (extra work, more things to keep track of, extra resources, etc.). [1] If you have no unmanaged resources to free, you have no reason to have a destructor and thus have no reason to implement the Dispose Pattern.  Some might say it’s handy “just in case”; but those cases are really rare.

.NET has evolved quite a bit since version 1.x; it now has rich support for many of the Windows features that people need to be able to use.  Most of the time, handles are hidden inside these feature abstractions and the developer doesn’t need to do anything special other than recognize that a type implements IDisposable and deterministically call Dispose in some way.  For the features that don’t have abstractions, lower-level abstractions like SafeHandle (from which SafeHandleZeroOrMinusOneIsInvalid, SafeHandleMinusOneIsInvalid, etc. derive)—which implements IDisposable and makes every native handle a “managed resource”—mean there is very little reason to write a destructor.

The most recent perpetuation of the anti-pattern is in a Resharper extension called R2P (refactoring to patterns).  Let’s analyze the example R2P IDisposable code:
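(The original snippet is missing; a hedged reconstruction of the kind of Dispose Pattern code described in the analysis below. The Person/Photo names match the corrected example later in the post.)

```csharp
public class Person : IDisposable
{
    public Bitmap Photo { get; set; }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Free managed resources only when called from Dispose().
            if (Photo != null) Photo.Dispose();
        }
        // Nothing to do for a false argument: no unmanaged resources.
    }

    ~Person()
    {
        Dispose(false);
    }
}
```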

As we can see from this code, the Dispose Pattern has been implemented, along with a destructor that calls Dispose(false).  If we look at Dispose(bool), it does nothing when a false argument is passed to it.  So, we could simply remove the Dispose(false) call and get the same result.  This also means we could completely remove the destructor.  Now we’re left with Dispose(true) in Dispose() and Dispose(bool).  Since Dispose(bool) is now only ever called with a true argument, there’s no reason to have this method.  We can take the contents of the if (disposing) block, move it to Dispose (replacing the Dispose(true) call), and have exactly the same result as before, without the Dispose Pattern.  Except now we’ve reduced the stress on the GC *and* made our code much less complex.  Also, since we no longer have a destructor there will be no finalizer, so there’s no need to call SuppressFinalize.  Not implementing the Dispose Pattern results in better code in this case:

	public class Person : IDisposable {
		public void Dispose() {
			if (Photo != null) Photo.Dispose();
		}
		public Bitmap Photo { get; set; }
	}

Of course, when you’re deriving from a class that implements the Dispose Pattern and your class needs to dispose of managed resources, then you need to make use of Dispose(bool).  For example:

	public class FantasicalControl : System.Windows.Forms.Control {
		protected override void Dispose(bool disposing) {
			if (disposing && Photo != null) Photo.Dispose();
			base.Dispose(disposing);
		}
		public Bitmap Photo { get; set; }
	}


Patterns are great: they help document code by providing consistent terminology and recognizable implementation (code).  But, when they’re not used in the right place at the right time, they make code confusing and harder to understand—they become anti-patterns.


Introduction to Productivity Extensions

The .NET Framework has been around since 2002. There are many common classes and methods that have been around a long time. The Framework and the languages used to develop on it have evolved quite a bit since many of these classes and their methods came into existence. Existing classes and methods in the base class library (BCL) could be kept up to date with these technologies, but it’s time consuming and potentially destabilizing to add or change methods after a library has been released and Microsoft generally avoids this unless there’s a really good reason.

Generics, for example, came along in the .NET 2.0 timeframe, so many existing Framework subsystems never had the benefit of generics to make certain methods more strongly typed.  Many methods in the Framework take a Type parameter and return an Object of that Type, which must first be cast in order for the object to be used as its requested type.  Attribute.GetCustomAttribute(Assembly, Type) gets an Attribute-based class that has been added at the assembly level.  For example, to get the copyright information of an assembly, you might do something like:

var aca = (AssemblyCopyrightAttribute)Attribute.GetCustomAttribute(Assembly.GetExecutingAssembly(),
    typeof (AssemblyCopyrightAttribute));

That involves an Assembly instance, the Attribute class, the typeof operator, and a cast.

Another feature added after many of the existing APIs were released was anonymous methods.  Anonymous methods will capture outer variables to extend their lifetime so they will be available when the anonymous method is executed (presumably asynchronously to the code where the capture occurred).  There are many existing APIs that make the assumption that state can’t be captured and it must be managed and passed in explicitly by the caller. 

For example:

byte[] buffer = new byte[1024];
fileStream.BeginRead(buffer, 0, buffer.Length, ReadCompleted, fileStream);

private static void ReadCompleted(IAsyncResult ar)
{
    FileStream fileStream = (FileStream)ar.AsyncState;
    fileStream.EndRead(ar);
}

In this example we’re re-using the stream (fileStream) as our state, passing it as the last argument to BeginRead so the callback can retrieve it.

With anonymous methods, passing this state in often became unnecessary as the compiler would generate a state machine to manage any variables used within the anonymous method that were declared outside of the anonymous method. For example:

fileStream.BeginRead(buffer, 0, buffer.Length,
  delegate(IAsyncResult ar) { fileStream.EndRead(ar); },
  null);

Or, if you prefer the more recent lambda syntax:

fileStream.BeginRead(buffer, 0, buffer.Length,
                    ar => fileStream.EndRead(ar),
                    null);

The compiler generates a state machine that captures fileStream so we don’t have to.  But, since we’re using methods designed before variable capturing existed, we have to pass null as the last parameter to tell the method we don’t have any state for it to pass along.

Microsoft has a policy of not changing shipped assemblies unless they have to (i.e. bug fixes).  This means that just because generics or anonymous methods were released, they weren’t going to go through all the existing classes/methods in already-shipped assemblies and add generics support or APIs optimized for anonymous methods.  Unfortunately, this means many older APIs are harder to use than they need to be.

Enter Productivity Extensions.  When extension methods came along, I would create extension methods to “wrap” some of these methods in a way that was more convenient with current syntax or features.  As a result I had various extension methods lying around that did various things.  I decided to collect all those (and others), look at patterns and create a more comprehensive and centralized collection of extension methods—which I’m calling the Productivity Extensions.

One of those patterns is the Asynchronous Programming Model (APM), the Begin* methods, and their use of the state parameter.  Productivity Extensions provide a variety of overloads that simply leave this parameter off and call the original method with null.  For example:

fileStream.BeginRead(buffer, 0, buffer.Length,
                    ar => fileStream.EndRead(ar));

In addition, overloads are provided that simply assume an offset of 0 and a length that matches the array length.  So, using Productivity Extensions, you could re-write our original call to BeginRead as:

fileStream.BeginRead(buffer, ar => fileStream.EndRead(ar));

Productivity Extensions also include various extensions that let older APIs that accept a Type argument and return an Object, like Attribute.GetCustomAttribute, make use of generics.  For example:

var aca = Assembly.GetExecutingAssembly().GetCustomAttribute<AssemblyCopyrightAttribute>();

There are many other instances of these two patterns, as well as many other extensions.  There are currently 650 methods extending over 400 classes in the .NET Framework.  This is completely open source at and available on NuGet (the ID is “ProductivityExtensions”) with more information at

I encourage you to have a look; if you have any questions, drop me a line, add an issue on GitHub, or add suggestions/issues on UserVoice at

Leave predicting to meteorologists and fortune-tellers

There are a couple of good axioms about software design: You Can’t Future-Proof Solutions and the Ivory Tower Architect.

You Can’t Future-Proof Solutions basically details the fact that you can’t predict the future.  You can’t possibly come up with a solution that is “future-proof” without being able to know exactly what will happen in the future.  If you could do that, you shouldn’t be writing software, you should be playing the stock market.

Ivory Tower Architect is a software development archetype whose attributes are that they are disconnected from the people and users their architecture is supposed to serve.  They don’t know their users because they don’t interact with them and they don’t observe them.  The Ivory Tower Architect’s decisions are based on theory, are academic or esoteric.  Ivory Tower Architects effectively predict what users will want and what will work.

Prediction is a form of guessing.  In the worst case (fortune tellers), the prediction is actively fraudulent—meant to tell someone something they want to hear for the teller’s own gain.  In the best case, it’s based on past experience and education and actually turns out to be true.  Yes, prediction is sometimes right.  But you don’t want to base anything very important on predictions.

Software is a very important aspect of a business.  It takes time, resources, and money to produce, and its success is often gauged by revenue.  Putting time, resources, and money into a “guess” is highly risky.  If that guess isn’t accurate, what is produced, in terms of software, is technical debt.  If predictions are false, the software will not be as useful as needed and will severely impact revenue or cost effectiveness.

How do you avoid predictions?  Communicate!  In terms of the ivory tower architect, they shouldn’t work in isolation.  They should at least work with their team.  They should also understand and converse with their customers. 

All the important information is outside of the organization’s place of business.  You need to understand specific problems and success criteria before you can provide a solution that will work.


And the winners are…

Kevin Davis and David Williams.

Please send me an email (via link at left) so I can send you details.


Developer Fitness

No, this isn’t something about Fitnesse, it’s really about physical fitness.  Caveat: I’m not a doctor.

Another conference under my belt: //Build/.  There seems to be a trend of private discussions at conferences (maybe it’s just me) about the sizes of t-shirts at developer conferences and how the average size is, well, above average.

There seemed to be a few conversations about fitness as well, at least in the context of losing weight.  Let’s be fair, being a developer is not kind to the body.  We sit around, usually inside (in the dark) staring at a computer screen (or screens).  Over-and-above the radiation aspect of this scenario, this means we’re largely sedentary as we perform our jobs.  Not a good thing.

I’m not big on excuses: yes, our job is sedentary; yes, it doesn’t involve much (if any) physical labour…  But that’s not an excuse to have a complete lack of exercise in our lives.  I’ve struggled with my weight for years, and I came to the conclusion a while back that I didn’t want to be overweight anymore.  I thought it would be useful to blog about what I’ve learned over the years.

Losing Weight

First off, the impetus for better fitness and better health is almost always about losing weight.  That will be the focus of this post.  If you don’t need/want to lose weight, this might be a bit boring.

Second: diets don’t work.  And by “work” I mean get you to, and keep you at, a healthy weight.  Yes, a diet will allow you to lose weight for a short period of time—that’s it.  Some diets aren’t even healthy.  I’m not going to mention specific diets (other than in the previous sentence).  If you want to lose weight in the long term, you need to make a lifestyle change—even if it’s just a small amount of weight (in today’s society, a small amount is probably in the range of 25-50 pounds).  If a diet cannot be sustained for the rest of your life and still keep you alive, then it’s a “fad” diet; avoid it.

I’m not talking about each of us becoming a bodybuilder or a fitness model; let’s get that out of the way: that’s not going to happen, that takes a level of commitment that would interfere with your job (i.e. it would become your full-time job).  But, we can be more healthy and get to a healthier weight and feel better about ourselves.

Changing Lifestyle

Yes, this means doing things differently in our lives.  Does it mean completely giving up certain foods?  Not necessarily.  You may have other reasons to “stop” certain things (if you’re diagnosed with high cholesterol, cutting out certain foods might be a must); but a lifestyle change generally means healthier ratios.  Pizza, for example: you can still eat it, just not three times a day.

Deep down in our hearts we really know how to lose weight and keep it off; we just don’t want to admit that we have to reduce certain things and increase others.  We know that a healthy weight means a certain caloric intake (usually lower than where we are), but we just don’t want to admit it.  We’d love it if we could cheat with a diet, pills, hypnosis, surgery, or a device.  Some people have had “success” with these things; but “results not typical” is generally somewhere to be found.

Changing lifestyle can be hard.  Over the years I’ve found various tips and tricks that help, which I’ll outline below.

Eat more often

This simple way of dealing with eating makes overeating and binging less of a problem.  The theory is that if you eat 5 smaller meals a day, your body will think you’re actually eating more.  When you *do* eat, you won’t be as hungry and you won’t feel the need to eat as much.  This “stokes the fire” of your system and avoids long gaps between meals; gaps as long as 4 hours can trigger your body to store fat.  Keep in mind, we’re ancient machines that had to survive in situations where we didn’t have food for extended periods of time.  It made sense to eat like a mad person for 3 months and store a lot of fat for the next 3-6 months when food might be scarce.  Face it, we’ve created an environment that is counter to our metabolism.

Basically, you’d still have 3 squares, but you’d also include two “snacks”.  I remember when I started doing this; it felt like I was eating all the time and eating way too much.  In the long run, I ate less.  Take the calories you would have eaten in the “3 squares” and spread them out over a couple of snacks, one after breakfast and one after lunch.  Once you get into the habit of doing this you’ll feel less hungry during the day and be less likely to binge eat.  It generally takes you and your body 6 weeks to get used to things, so if you try something new, try it for at least 6 weeks before making a decision (unless, of course, you have sudden and severe side effects).  Also remember that your snacks should be balanced in macronutrients for them to be as effective as they can be.

Macro nutrients

Every single eating style pays close attention to the three macronutrients: fats, carbohydrates, and proteins.  Our bodies need each of these macronutrients to survive, and most foods contain each of them in different proportions.  Tenderloin is high in protein, so you need to pair it with carbs.  Bacon is high in protein and fat, so you pair it with carbs.  Broccoli is high in carbs, so you eat it with protein, etc.  Most of the “diet plans” really just have a unique macronutrient ratio.  The USDA (at one time; they revise it periodically) recommended 18:29:53 (protein:fat:carbohydrate, as % of calories), Atkins is generally :65:, the Zone is 30:40:30, etc.  I like the macronutrient-ratio plans because you can eat anything you want as long as you can apply the ratio; i.e. it works while being vegan or vegetarian.  There are other plans, like Paleo, that approach nutrition around the idea that we evolved from a point where we didn’t have all the manufactured, engineered and contrived food, and focus on “natural” stuff (although, not vegetarian :)

It’s important not to focus on one macronutrient, and it’s important not to cut one out entirely.  Cutting out fat, for example, sounds good (“fat” is the same word as in “bodyfat”) but could lead to malnutrition: vitamins like A, D, E and K are *fat soluble*, which means fat needs to be present for them to be absorbed.  If you don’t get enough fat you can end up not absorbing enough A, D, E, or K, leading to health issues.  It’s generally the choice of fat that makes the difference in health/weight loss.  Yeah, you could have fries with your A, D, E, and K foods (or supplements), but those aren’t the *good* fats.  Maybe some guacamole would be better.

But, no matter what you read or what you choose, “it depends”.  There’s more to healthy eating than just a magic ratio; metabolism, genetics, etc. play a part.  I can’t stress this enough: you need to find something that works for *you*.  One of these plans might be right, but don’t assume they all are.
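For the curious, here’s a quick sketch of how a percentage split like the ones above turns into daily gram targets.  This is just illustrative arithmetic; the 2,000 kcal/day figure is an assumption on my part, and the 4/9/4 kcal-per-gram conversion factors for protein/fat/carbohydrate are the standard textbook values, not something specific to any of these plans:

```python
# Convert a macronutrient ratio (% of calories) into daily gram targets.
# Standard conversion factors: protein 4 kcal/g, fat 9 kcal/g, carbs 4 kcal/g.
KCAL_PER_GRAM = {"protein": 4, "fat": 9, "carbohydrate": 4}

def grams_per_day(daily_kcal, ratio):
    """ratio maps macronutrient name -> fraction of daily calories."""
    return {macro: round(daily_kcal * fraction / KCAL_PER_GRAM[macro])
            for macro, fraction in ratio.items()}

# The USDA-style 18:29:53 split, on an assumed 2,000 kcal day:
usda = grams_per_day(2000, {"protein": 0.18, "fat": 0.29, "carbohydrate": 0.53})
print(usda)  # {'protein': 90, 'fat': 64, 'carbohydrate': 265}
```

Note how fat comes out to far fewer grams for the same share of calories; it’s more than twice as calorie-dense as protein or carbohydrate, which is why fatty foods add up so quickly.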

Supplements

No, I’m not talking about roids or some funky anabolic-raising concoction.  I’m talking about things that aren’t food, generally vitamins and minerals: something that supplements your diet.  It’s hard (and emotionally unhealthy) for the average person to eat exactly the same thing day after day to get the perfect vitamin and mineral intake (whatever that is); we need some variety in our lives.  So it’s hard to make sure we’re eating everything we need to get the nutrients our body needs every day.  I’ve been supplementing for years, well before doctors and nutritional committees/ministries started accepting it.  Yes, as Sheldon says, “it makes expensive pee”.  That is true; but it also means our body has access to the nutrients it needs to function properly and doesn’t do the things it does when it thinks it’s malnourished (like storing fat, spiking blood sugar levels, etc.).  This is an area to be careful about.  Many vitamins need *huge* quantities to be toxic, but some don’t; and some are contraindicated for certain people.  Ginseng, for example, *isn’t* generally a good thing for people with heart problems.

Other than hyper-dosing on vitamin C (which still might be bad if you have ulcers), or simply taking a multivitamin as directed, you should talk to a health professional before drastically changing supplements.

Fibre

(aka “Fiber” for my US friends).  While there’s only a handful of nutritional plans that take fibre into account, I believe it’s effectively the fourth macronutrient.  Some nutritional plans allow you to eat more of the other macronutrients when more fibre is eaten at the same time.  E.g. whole-wheat and white bread are roughly the same in calories, but most diets recommend whole-wheat over white, which is partially because of the extra fibre (some of it also has to do with the different ways your body metabolises each: white metabolises into glucose faster, which can be stored as fat more easily).  Generally, the more fibre something has, the better it is for you.  It’s useful to know which things are high in fibre when you’re eating out so you can make better decisions.

Things to cut out

Okay, I lied: there are a few things I would recommend not eating at all.  I don’t drink soda any more.  There’s really nothing of any nutritional benefit in any soda beverage—especially sugar-free.  Sugar-free, in my mind, is one of the worst things to drink: there are studies that suggest it tricks the body into thinking it’s eating something sugary and triggers it to store fat.  Even if you don’t believe those studies, there’s still nothing beneficial in soda; I generally stick to water when I’m thirsty.  Cutting out a single can of soda a day will reduce your calorie intake by up to 50 thousand calories a year!  That’s the equivalent of about 35 meals, or almost 11 full days of eating.  If you’re currently drinking a cola a day, that’s one really easy thing to do to help lose weight.  Another is salt.  I don’t cut out foods with *any* amount of salt in them, but I avoid really salty foods and don’t add salt to meals.  It’s not healthy for the heart and leads to water retention (we’re hoping to look better, not “bloated”, right?).  If you reduce table salt drastically, make sure you don’t run the risk of getting a goiter (amongst other things) from the reduced iodine (which can be countered by eating the right kinds of fish).
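The soda arithmetic checks out on the back of an envelope.  A minimal sketch, assuming roughly 140 kcal per can, an average meal of about 1,500 kcal, and 3 meals a day (all assumed figures, not exact):

```python
# Back-of-envelope: calories saved in a year by dropping one can of cola a day.
KCAL_PER_CAN = 140      # assumed; a typical 355 ml can of cola
KCAL_PER_MEAL = 1500    # assumed average meal size
MEALS_PER_DAY = 3

yearly_kcal = KCAL_PER_CAN * 365          # 51,100 kcal -- "up to 50 thousand"
meals = yearly_kcal / KCAL_PER_MEAL       # ~34 meals -- roughly "35 meals"
days = meals / MEALS_PER_DAY              # ~11 "full days of eating"

print(yearly_kcal, round(meals), round(days))
```

Plug in your own can size and meal size; the point is the order of magnitude, not the exact numbers.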

Let your body help you

Muscle takes more calories to maintain than fat; the more muscle you have in your body, the more calories are required at rest.  This is useful because if you increase your quantity of muscle and maintain the same level of caloric intake, then it’s the same as reducing calories.  Many people recommend bodybuilding as a means to lose weight.  You get an increased level of exercise (some of it cardio) while increasing your muscle mass, making it easier to sustain a healthy body weight.  This generally means compensating with a higher consumption of protein.  But it’s not for everyone; if you have heart issues it may not be a good idea.  If you think it’s something you’d want to try, check with your doctor first, just to be on the safe side.  I found it really hard to start and maintain a pace by which to increase muscle mass on my own, so I hired a trainer a couple of years ago to jump-start that.  I already knew most of the techniques and theory; but being on a schedule and having to be there for someone else (or still pay them) was excellent motivation to get going and to maintain a healthy pace.  It’s also helpful, if you don’t have a gym buddy, to have a trainer around to spot you and avoid injury.

Whatever you choose for activity, I believe in “balance”.  If you want to concentrate on increasing muscle mass, you should still do some cardio.  It’s good for your heart, helps with endurance, and introduces a change of pace that can help break up the doldrums of the same type of workout 3-4 times a week.

It’s not just about X

Where x is fitness or nutrition.  Simply changing your eating habits alone isn’t likely to make a huge positive impact on your health.  Yes, you could eat much less or eat much differently and your weight may change (I’ve seen people gain weight when they start eating “healthy”…), but this tactic alone can lead to health problems (i.e. “diets” don’t work).  The same goes for fitness: if you simply start working out (running, jogging, cardio, etc.) without changing your eating habits, you run the same risk.  Your body is now in need of different nutrients to sustain the work you’re making it do, and you could run into health problems from a lack of appropriate nutrients.  I’m a big proponent of a well-rounded lifestyle (not only in terms of fitness, but that’s another blog post :).  I believe in both healthy food consumption and an active lifestyle.  The activity you want to perform can also mean eating differently, possibly on a daily basis.  The variables are endless, and your metabolism affects how you should eat/exercise; I recommend some thorough research if you want to get really efficient at it.

Cheating

Losing weight is goal-oriented.  The final goal is, of course, a smaller t-shirt size; but for some of us that’s a long-term goal, and it’s difficult to maintain something without “instant” feedback.  “Cheating” is a common method of maintaining a healthy lifestyle with short-term goals.  As I mentioned earlier, you don’t have to cut out certain foods; you can use them as motivation.  Take pizza: sure, don’t have it once a day; but if it’s your kryptonite (like me), have it once a week if you meet your other goals.

Health v Mood

It’s easy to eat certain foods because you’re in a certain mood.  We tend to resort to comfort foods when, well, we need comfort—when we’re not feeling good about ourselves or something earth-shattering has occurred in our lives.  It’s important to be cognizant of what we eat.  Food is a drug that affects us beyond mood—we need to use that drug properly and not abuse it.  If you’re in a bad mood, try to pay more attention to what you eat.

Watch what you eat

Healthy eating really gets down to simply knowing what we eat.  Simply knowing that, in-the-large, a can of cola a day is the equivalent of 35 meals a year in calories, we can make better decisions in-the-small and maybe choose water over cola.  Choosing to limit soda is a fairly easy decision to make; deciding what to eat, the quantities to eat, and the ratio of macronutrients to consume can get a little daunting.  Some of the simple decisions I make throughout the day: whole wheat over white, high protein, low fat, avoid starchy carbs, avoid sugary beverages, don’t add fats, etc.  A few simple mantras like this can make your food choices much easier from day to day.  Also, each person is different.  There are different body types (mesomorph, ectomorph, endomorph) and different genetic backgrounds that can affect how your body metabolises food.  E.g. certain genetic backgrounds did not have milk in their diet, so they haven’t evolved to tolerate it; if you’re this type of person, milk-based protein supplements might not be a good idea.  What I’m really trying to say is that you need to spend a bit of time, through trial and error, figuring out what works for you before you can find a lifestyle that not only works for you, but that you’re comfortable with.

What I like about any particular nutrition plan (Zone, Paleo, veganism, vegetarianism, etc.) is that it makes you think about what you eat.  I recommend finding one that works and sticking with it.  And yes, that could mean veganism (although it is harder to maintain).  It’s important to pick one you know you can be consistent with; “falling off the wagon” too many times can lead to disappointment, stress, and gaining even more weight.

Sleep

Sleep is important for your health in general, but also for your waistline.  Many studies show that getting a good night’s sleep helps tremendously with attaining a healthy weight as well as maintaining one.  Poor sleeping habits can lead to stress, which can lead to increased cortisol levels, which leads to changes in insulin levels, which can lead to your body storing fat.  There have been a few studies out there suggesting it’s healthier to wake up early and go to bed early.  I think that generally puts you in sync with dusk and dawn and maximizes your sun exposure, leading to a better mood and less stress.  But I find it hard to do… (did I mention, I don’t think it’s “just about X”? :)

Diabetes

I bring this up not because it’s a very common acquired disease, or because more than a few friends and family have it; I bring it up because I think what someone with Type 1 or Type 2 diabetes has to deal with in a day can bring much benefit to the average person.  Diabetics have to constantly monitor blood sugar levels and counteract spikes and troughs through the manual introduction of insulin.  A non-diabetic person generally has a metabolism that monitors and deals with that automatically.  But that doesn’t mean spikes in blood sugar and huge changes in insulin production are *good* for people.  If you maintain a healthy blood sugar level through the day and don’t cause your body to spike insulin production, your body will be under less stress (cortisol) and not be in situations where it wants to store fat rather than burn energy.  (One of the reasons I’ve cut out sodas…)


Kind of a brain dump, to be sure, and if there’s enough interest I can go deeper into each section…  But take on the goal of reducing a conference t-shirt size in the next 6 months, or by the next conference you hope to attend!  Post back (or send me an email) on your progress.  I’d love to see our community and industry be much healthier; I want to be able to spend more time with you people, not less.


Win a free copy of Visual Studio 2010 Best Practices

Win a free copy of ‘Visual Studio 2010 Best Practices’ just by commenting!

We’re giving away two ebook editions of Visual Studio 2010 Best Practices.

All you have to do to win is comment on why you think you should win a copy of the book.

I’ll pick a winner from the most creative answers in two weeks.

