Jul 05

Today we’re only going to talk about the volatile keyword. The volatile keyword can be used on the declaration of a field, transforming it into a volatile field. Currently, you can only annotate a field with this keyword if it is:

  • a reference type;
  • a pointer type (unsafe code);
  • one of the following types: sbyte, byte, short, ushort, int, uint, char, float or bool;
  • an enum with a base type of byte, sbyte, short, ushort, int or uint.
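To make the list concrete, here is a sketch of declarations the compiler accepts and rejects (the class and field names are made up for illustration):

```csharp
using System;

class VolatileExamples {
    private volatile object _reference;   // reference type: allowed
    private volatile int _counter;        // int: allowed
    private volatile bool _flag;          // bool: allowed
    private volatile DayOfWeek _day;      // enum whose base type is int: allowed

    // private volatile long _ticks;      // compile-time error: long is not in the list
    // private volatile double _ratio;    // compile-time error: double is not in the list
}
```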

As we’ve seen, volatile ensures that the proper fences are applied when someone accesses that field (i.e., a read gets an acquire fence and a write ends up injecting a release fence). As you know by now, load and store reordering can happen at several levels, and you might be wondering whether using volatile is enough to ensure that fences are applied at all of them. Fortunately, the answer is yes: the volatile keyword is respected both by the compiler and by the processor.
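As a quick illustration of those fences at work, consider this hand-off between two threads (a minimal sketch; HandOff, _done and _value are hypothetical names, not from the post):

```csharp
using System;
using System.Threading;

static class HandOff {
    private static volatile bool _done;  // the volatile flag
    private static int _value;           // the payload, an ordinary field

    public static int Run() {
        var worker = new Thread(() => {
            _value = 42;   // ordinary store
            _done = true;  // volatile store: the release fence keeps the store to _value before it
        });
        worker.Start();

        while (!_done) { } // volatile load: acquire fence on every iteration
        worker.Join();
        // Once _done reads true, the store to _value is guaranteed to be visible.
        return _value;
    }
}
```

Without the volatile keyword on _done, the compiler or processor would be free to reorder the two stores (or hoist the load out of the loop), and the reader could spin forever or observe a stale _value.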

Ok, so when should you use this keyword? Well, an example is probably in order, right? Let’s take a look at the following code, which shows what I’ve written in the past for lazy loading:

class Lazy {
  private SomeObject _object;
  private Object _locker = new Object();
  public SomeObject SomeObject {
    get {
      if (_object == null) {
        lock (_locker) {
          if (_object == null) {
            _object = new SomeObject();
          }
        }
      }
      return _object;
    }
  }
}

What is your opinion? Do you see anything wrong? (By the way, suppose SomeObject is a reference type with some properties.) I’ll return tomorrow and we’ll come back to this discussion. Stay tuned!

3 comments so far

  1. Ivan Kotev
    4:02 pm - 7-6-2009

    Well, there is one subtle problem 🙂 The problem is with the if-statement “if (_object == null)”. As we know, for performance reasons CPUs can store frequently accessed objects in the CPU cache. If we run on a multiprocessor machine and the OS decides to execute our threads on different processors, we might end up in a situation where Thread1’s writes are not visible to Thread2: maybe the value is still in CPU1’s cache, or CPU2 hits its own cache and reads an old value. (As far as I know, this is true only for IA64, which has a weaker memory model and allows loads to appear out of order. x86 and x64 have CPU cache coherency, which means that any change in CPU1’s cache will be synchronized to CPU2’s cache??)

    To solve the problem we need to read from main memory and not from the cache, so we might use one of:

    1) private volatile SomeObject _object;
    2) Thread.VolatileRead(ref _object)
    3) Thread.MemoryBarrier() before “if (_object == null)”

    Also, when _object is null we create a new SomeObject and synchronize access to it with a lock (Monitor). When the Monitor is entered it applies a read (acquire) memory barrier, and when it exits, a write (release) memory barrier, so no stale cache hit => no problem.
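    For instance, option 1 is just a one-word change to the original class (a sketch; SomeObject here stands in for whatever reference type you are loading lazily):

    ```csharp
    class SomeObject { } // placeholder for the real type from the post

    class Lazy {
        private volatile SomeObject _object;          // volatile: fixes the check outside the lock
        private readonly object _locker = new object();

        public SomeObject SomeObject {
            get {
                if (_object == null) {                // volatile read: acquire fence
                    lock (_locker) {
                        if (_object == null) {
                            _object = new SomeObject(); // volatile write: release fence publishes the instance
                        }
                    }
                }
                return _object;
            }
        }
    }
    ```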

    What do you think?

  2. Luis Abreu
    6:15 pm - 7-6-2009

    Yep, according to my studies (and I’m no expert), IA64 has a pretty weak memory model, allowing several “problematic” reorderings. And you’re right regarding the problem and its solution.

  3. sudhee
    10:01 am - 11-4-2010

    To learn more about multithreading in C# (synchronization primitives, do’s and don’ts, etc.)

    Click here :
