Executing LabVIEW VIs through C style function pointers via .NET

Problems exist to be solved.


Some problems are so complex that you cannot solve them when you need to. With other problems it is not a matter of complexity, but a matter of not having all the pieces of the puzzle.


Some problems are so interesting that it pays to keep them in the back of your head, just in case you ever find that missing piece that allows you to solve them.


One of those problems is that with the current LabVIEW external library interface, it is impossible for a DLL to execute a LabVIEW VI through a C style function pointer.


This feature does not exist, even though lots of programmers would really benefit from it.


Two weeks ago I was active on the Visual C++ newsgroup as usual, when I learned, by pure coincidence, about an advanced function in the .NET framework called Marshal.GetFunctionPointerForDelegate.


I immediately realized that this was the key to solving the old C style function pointer issue.


Problem description


Suppose that we have a DLL that exports a function. This function takes a function pointer as an argument, to be executed sometime later to pass some data to the calling application.


The Test DLL code


The idea is to test the ability to run a LabVIEW VI through a function pointer. To do that we need a DLL that exercises the callback for us.


The following very simple test function has been created for this purpose.


This is the expected prototype of the callback function:


typedef void (__stdcall *fSimpleCallBack)(int i);


And this is the DLL function that will perform our test:


//Test of the simple callback
void Simplefoo(int i, fSimpleCallBack cb)
{
  (cb)(i);  //invoke the callback function.
}


As you can see, the LabVIEW VI will be executed with the supplied parameter through the supplied function pointer.


Principal solution


The very first step is to create a .NET delegate whose signature matches the signature of the required callback function.


Then a proxy class is created with only 2 members: an event based on the new delegate type, and a method called ‘GetDelegate’.


LabVIEW has the ability to link a LabVIEW VI with a .NET event. The VI prototype is generated automatically, and matches the event delegate signature.


After registering a VI for the specified event, LabVIEW uses the GetDelegate method of our proxy class to extract the registered delegate from the event. That delegate is then converted to a function pointer via the ‘GetFunctionPointerForDelegate’ method of the Marshal class.


That function pointer can then be supplied to a native function that can execute the LabVIEW VI directly through the function pointer without requiring any modification at all.


A note on GetFunctionPointerForDelegate


The documentation for this method is sparse. It seems that VS2005 and .NET2.0 were rushed out of the door without taking the time to fully document the more advanced functions and providing code samples in the documentation.


Because of this, it is easy to miss that GetFunctionPointerForDelegate returns a pointer that uses the __stdcall calling convention by default.


It is also easy to overlook the fact that you can change this.


If you need a function pointer that uses another calling convention – e.g. __cdecl – then you can achieve this by applying the UnmanagedFunctionPointerAttribute attribute to the delegate definition.
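
As a hedged illustration (this snippet is not part of the demo code, and the names are made up), this is roughly what such a delegate definition would look like if the native callback used __cdecl instead of __stdcall; it assumes System.Runtime.InteropServices is imported:

  //Hypothetical delegate matching a __cdecl callback such as:
  //typedef void (__cdecl *fCdeclCallBack)(int i);
  [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
  public delegate void CdeclCallbackDelegate([MarshalAs(UnmanagedType.I4)] int i);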


The .NET code


using System;
using System.Runtime.InteropServices;

namespace FunctionPointerProxy
{
  public class SimpleProxy
  {
    //Marshal a simple integer parameter as a 32 bit integer.
    public delegate void SimpleProxyDelegate(
      [MarshalAs(UnmanagedType.I4)] int i);

    //Export an event, because that is the only way we are going to get
    //LabVIEW to give us a delegate.
    public event SimpleProxyDelegate SimpleEvent;

    //Extract the delegate from the event. If no VI was registered,
    //this will trigger an exception which is caught by the
    //LabVIEW .NET interface node. No need to worry about it here.
    public Delegate GetDelegate()
    {
      return SimpleEvent.GetInvocationList()[0];
    }
  }
}


This is all the additional code you really need to make it work. The delegate has to mirror the native callback prototype exactly. To achieve this, use the MarshalAs attribute on each delegate parameter to ensure correct marshalling behavior.


The event simply has to exist because LabVIEW has only one way to produce a delegate for a VI, and that is by registering a VI to an event.


And even then, the delegate is never directly accessible, so we have to create a ‘GetDelegate’ method that extracts the registered delegate, and returns it to LabVIEW.


As far as the .NET framework is concerned, we could simply make the event and the GetDelegate method static to remove the need to instantiate an object that we won’t need for anything else. But there are 2 reasons why we don’t do that:


  • LabVIEW does not support registering VIs to static events.
  • Our GetDelegate function would need a way to know which delegate you want to get, in case there are multiple instances of callback VIs. Since the event is tied to an instance, it is easy enough to keep track of the events in LabVIEW.

The LabVIEW code


LabVIEW code is graphical, so I have chosen to mark the different steps in the block diagram below, and then add an explanation of each step in a numbered list.


 


  1. Instantiate a new SimpleProxy class. We only need it to access the event. Sadly, LabVIEW does not yet support static events.
  2. Register a VI for the event. The way to do this is to wire the SimpleProxy refnum to the event source and select the appropriate event. Right-click ‘VI Ref’ and select ‘Create Callback VI’. This will automatically generate a VI with the correct input and output parameters for you, and automatically wire a static VI reference to the VI Ref input. The user parameter can be any LabVIEW type at all that you want to pass to the VI when the .NET event is triggered. We don’t need it, but if we did we would have to wire it before generating the callback VI because it changes the VI prototype.
  3. Now that the event is registered, we can simply get the .NET delegate that represents the VI callback out of it.
  4. Use a static member of the Marshal class to get a function pointer to the delegate. This is where the magic happens. A native stub is generated for the delegate at runtime. The unmanaged parameter list is built according to the marshalling attributes that were attached to the delegate signature.
  5. Cast the delegate IntPtr to an Int32. Pointers are 32 bit on the x86 platform. We need this 32 bit value to supply it to the DLL function call that expects a 32 bit function pointer. Note that this function pointer uses the __stdcall calling convention by default (see the note on GetFunctionPointerForDelegate above).
  6. We no longer need the FunctionPointer IntPtr. As long as the delegate itself is still in memory, the native stub will remain valid.
  7. Execute the DLL function. This function will use the supplied callback function pointer to execute the callback VI, and it will pass the supplied int parameter to the callback VI at the time of the call.
  8. Unregister the callback VI from the event, and close the various references that were opened. (The same sequence is sketched in C# right after this list.)
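
For readers who find C# easier to follow than a block diagram, the same sequence looks roughly like this. This is a hedged sketch: the DLL name and calling convention in the DllImport declaration are assumptions, and in LabVIEW the callback registration is of course done with a callback VI rather than an anonymous delegate:

using System;
using System.Runtime.InteropServices;
using FunctionPointerProxy;

class CallbackDemo
{
  //Hypothetical P/Invoke declaration for the test DLL.
  [DllImport("TestDll.dll", CallingConvention = CallingConvention.Cdecl)]
  static extern void Simplefoo(int i, IntPtr callback);

  static void Main()
  {
    SimpleProxy proxy = new SimpleProxy();

    //Step 2: register a handler (LabVIEW registers a callback VI instead).
    proxy.SimpleEvent += delegate(int value)
    {
      Console.WriteLine("Callback received {0}", value);
    };

    //Steps 3 and 4: get the delegate and turn it into a function pointer.
    Delegate d = proxy.GetDelegate();
    IntPtr fp = Marshal.GetFunctionPointerForDelegate(d);

    //Step 7: the DLL calls back through the raw function pointer.
    Simplefoo(42, fp);

    //Keep the delegate alive as long as native code might still call it.
    GC.KeepAlive(d);
  }
}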

Running the test


As is sometimes said: the proof is in the pudding.


To verify that everything works as advertised, I have put a simple dialog box in the callback VI that displays the value of the integer parameter.


Simply run the example and you will see the following dialog box:



Advanced implementation


The principal solution works, but is not yet ideal. Most importantly, there is too much clutter on the LabVIEW diagram for this solution to be aesthetically pleasing.


There are too many manual actions that have to be repeated each time you implement a callback function. I also wanted to show you an example of a callback function that is a bit more complex.


Test DLL


Consider the following callback prototype that is a bit more complex than the previous one:


typedef void (__stdcall *fAdvancedCallBack)(
                  int i,
                  char* aString,
                  wchar_t* wString);


Here we need to supply an integer, an ASCII string and a UNICODE string. The function exported from the DLL has the following implementation:


void Advancedfoo(int i, char* aString, fAdvancedCallBack cb)
{
  BSTR uString;
  int aLength = lstrlenA(aString);
  int uLength = ::MultiByteToWideChar(CP_ACP, 0, aString, aLength, 0, 0);

  if(0 < uLength)
  {
    //allocate room for uLength wide characters plus the terminating null.
    uString = ::SysAllocStringLen(NULL, uLength);
    ::MultiByteToWideChar(CP_ACP, 0, aString, aLength, uString, uLength);
  }
  else
    uString = ::SysAllocString(L"");

  //invoke the callback function through its function pointer.
  (cb)(i, aString, uString);

  ::SysFreeString(uString);
}


LabVIEW itself does not support UNICODE strings, so the DLL converts the supplied ASCII string to UNICODE before handing it to the callback. Apart from that, the function body only contains a call through the supplied callback pointer.


.NET code


  public class AdvancedProxy : IProxy
  {
    //The calling convention attribute belongs on the delegate definition.
    //The native prototype uses __stdcall, so we make that explicit here.
    [UnmanagedFunctionPointerAttribute(CallingConvention.StdCall)]
    public delegate void AdvancedProxyDelegate(
      //A simple I4 parameter. Nothing special.
      [MarshalAs(UnmanagedType.I4)] int i,
      //An ASCII string. This will be marshalled as UNICODE on the
      //.NET side and ASCII on the unmanaged side.
      [MarshalAs(UnmanagedType.LPStr)] string aString,
      //A UNICODE string. This will be marshalled as UNICODE on the
      //.NET side, and UNICODE on the unmanaged side.
      [MarshalAs(UnmanagedType.LPWStr)] string wString);

    public event AdvancedProxyDelegate AdvancedEvent;

    public Delegate GetDelegate()
    {
      return AdvancedEvent.GetInvocationList()[0];
    }
  }


The meaning of the different parts is the same as with the SimpleProxy example. The only real difference – apart from the delegate signature of course – is that all classes that are used as a proxy between a LabVIEW VI and a function pointer should implement the interface IProxy.


This interface has the following definition:


  public interface IProxy
  {
    Delegate GetDelegate();
  }


This allows us to make the LabVIEW code for getting a function pointer generic, instead of tied to the type of the callback delegate.
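
To make the idea concrete, here is a hedged C# sketch of the kind of generic helper the LabVIEW diagram effectively implements; the helper name is hypothetical and it assumes System.Runtime.InteropServices is imported:

  //Hypothetical helper: works for any proxy class, because it only
  //depends on the IProxy interface and not on the delegate type.
  public static IntPtr GetFunctionPointer(IProxy proxy)
  {
    return Marshal.GetFunctionPointerForDelegate(proxy.GetDelegate());
  }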


LabVIEW code



As you can see, the LabVIEW code is vastly simplified.


  1. The only thing that we cannot make generic is the event registration, because the event, and as a result the VI reference, are strongly typed.
  2. The process of retrieving the function pointer is generic, because all proxy classes now implement the IProxy interface for retrieving the event delegate.
  3. The call to the test function in the DLL now also takes a string parameter that gets expanded to UNICODE inside the test function, to supply the callback function with the correct parameters.
  4. Cleanup of the event and references is also generic.

There are 2 new VIs. These are generic, so you can reuse them without limitation. This also makes it easy to support an asynchronous callback mechanism. You can register the event and supply the function pointer to the DLL.


The DLL stores it internally, and uses it any time it needs to. Meanwhile, your LabVIEW program can continue processing. You can keep the references cluster around – for example in a buffer – until such time as you want to disable the callback mechanism again.


Running the demo code


The Visual Studio solution is for VS2005, and can be built with all versions of VS2005.


The LabVIEW project uses version 8.2. You need at least version 8.0 to have decent .NET support, and I used 8.2 because that is the latest version at this moment.


To run the demo, simply extract the sources, then open and build the Visual Studio solution. Then open the LabVIEW project and run the sample of your choice.


Conclusion


This article has outlined a relatively simple way to directly execute a LabVIEW VI through a C style function pointer without having to write C or C++ code. The demo code for this article is available under the MIT license as usual.


You still need to provide a .NET class library, but the code involved is so minimal and trivial that it should not stop anyone.


This approach also makes it possible to use the wealth of win32 API functions that require a callback function without having to resort to writing C or C++ code.
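
As a hedged illustration of that idea (this is not part of the demo code, and the class and delegate names are made up), a proxy for the Win32 EnumWindows callback would look almost identical to the SimpleProxy above:

  //Sketch of a proxy matching the Win32 WNDENUMPROC prototype:
  //BOOL CALLBACK EnumWindowsProc(HWND hwnd, LPARAM lParam);
  public class EnumWindowsProxy : IProxy
  {
    public delegate bool EnumWindowsDelegate(IntPtr hwnd, IntPtr lParam);

    public event EnumWindowsDelegate EnumWindowsEvent;

    public Delegate GetDelegate()
    {
      return EnumWindowsEvent.GetInvocationList()[0];
    }
  }

LabVIEW would register a callback VI for the event and pass the resulting function pointer to EnumWindows through a Call Library Function node.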


In the time it took me to write this article (which took longer than writing the code, as usual) I found 3 other ways to achieve the same result.


One is an ugly kludge that only fakes the callback functionality. The other performs a true callback to a LabVIEW VI, but it is even uglier. The third one is fairly elegant but has some other limiting factors.


I will explain those 3 lesser solutions in another article. Even though only one of them is useful, the principle behind all 3 of them might be of use in another situation.


Finally, the whole LabVIEW and function pointer issue has existed for a long time among LabVIEW programmers. I have done a bit of research, and as far as I can tell, I am the first to ever solve this problem without requiring C or C++ code.


So here and now I claim to be the first to implement and publicize a working C style function pointer callback mechanism in LabVIEW!


In my next article I will follow up with an example involving structures (Clusters in LabVIEW) and n-dimensional arrays.

Good riddance to /clr:oldSyntax

As I already mentioned in my blog post about tech-ed day 3, the C++ compiler switch /clr:oldSyntax will be deprecated with the release of Orcas.


This is a good thing.


What is ‘managed extensions for C++’?


If you know what Managed Extensions for C++ is, you can skip this section. If you don’t, then this is for you.


‘Managed extensions for C++’ is the syntax that made it possible to program managed .NET code in versions 2002 and 2003 of Visual C++.


The syntax was designed to be as compatible as possible with the existing ISO C++ standard. For those of you who don’t know, this means among other things that you are constrained in how non-standard keywords can be named.


Names of non-standard keywords have to start with a double underscore, e.g. __gc.


Any kind of pointer is declared and dereferenced with *, and you use & to take the address of a variable.


There is a lot of other stuff, but this is the most visible part to the programmer.


Why the managed extensions suck


There are 2 major reasons why MC++ has a significant amount of what we in the trade call ‘suckage’.


It is butt-ugly


When programming, I don’t mind having to use the occasional keyword with leading __, because it draws attention to a special feature, and it is not too obtrusive anyway.


Now consider a source file that has __gc, __nogc, __pin, __whatever on every other line. It is very obtrusive, it is a lot of syntactical noise, and it makes the code very hard to read.


Now consider reading your way through thousands of lines of code like that.


It is ambiguous


An essential part of being able to maintain a code base is the ability to read it and to know what you are looking at.


MC++ fails this test miserably.


Consider the following example:


T * t = new T();


Is T managed or not? You cannot tell by looking at the code. You actively have to track down the declaration of T and find out.


Suppose you want to adapt some piece of code that uses T, but you have no idea what the type of T is.


This is a bit akin to a situation where you have screws and nails to join 2 pieces of wood together, but you can’t tell the difference by looking at them. You can either walk to the box they came from and read the label, or you just take the hammer, try to bash it into the wood and if that fails, it must have been a screw, right?


That is exactly what you would have to do if you want to adapt that piece of code, only the wood is the application, the screw or nail is your change, and the hammer is the compiler.


C++/CLI


Along came C++/CLI. It is the new C++ binding for writing .NET code. It has an elegant syntax, and is easy to use and learn.


It uses ^ and % for managed code instead of * and &, and gcnew instead of new. There are other differences, but they are all for the good.


Of course there are a couple of complainers who do not like the fact that it is an extension to the C++ syntax, instead of following the old rules.


What does /clr:oldSyntax do?


It is a compiler switch that allows Visual C++ 2005 to compile C++ managed code that uses the old syntax.


It is not perfect, and there are some subtle behavior changes compared to VC2003, but most of the time it works.


Why does /clr:oldSyntax have to die


That switch was created to give early adopters the time to migrate away from the MC++ syntax.


The reasons that this switch really has got to go away are:


  • A lot of toolkits and libraries are distributed as source code. If MC++ remained a valid syntax, new code would be written in it every day, and the C++ source code world would eventually split in two. That would be bad for code reuse, and bad for people trying to learn C++ for .NET, who would have to learn 2 completely different syntaxes.
  • It is not perfect anyway. In the public newsgroups I have helped a few people migrate their codebase to VC++ 2005 using /clr:oldSyntax, but it is only a syntax patch. There are a couple of nuances that make the behavior of the compiled code slightly different from what it used to be.
    Apart from that, there are a couple of bugs in it, so it is far from perfect.
  • It is a significant burden on the compiler front end. The compiler has to be able to parse 2 different syntaxes for doing the same thing. This makes it more complex, and could lead to bugs.
  • It is a significant cost to Microsoft, because the whole part of the front end that supports the switch has to be maintained, updated and tested with each new release of VC++. This is pretty stupid since it keeps a language feature alive that they want dead anyway.

Conclusion


Microsoft warned in 2003 that C++/CLI would replace MC++, so you can’t claim that you didn’t see this coming.


/clr:oldSyntax is dead. Get over it or fade away.


If you are one of the early adopters, this sucks. I know it does, and it should not have happened like this, but it did anyway.


Microsoft made a serious mistake by developing MC++ in the first place, and now they have to make this hard decision.


It sucks for the early adopters, but if Microsoft does not remove the switch, they have to spend a lot of money to keep the few happy.


At the same time they would hurt the C++ .NET adoption by fragmenting the available technology. This would also split the community and cost the industry a load of money because programmers would have to learn 2 languages.


This would be a lose-lose situation, so they have no choice but to bite the bullet. They cannot do anything else.

Tech-Ed developers Barcelona: Afterthoughts

Tech-ed developers 2006 has come and gone.


Traveling back home was uneventful, despite the fact that Barcelona airport has to be the most disorganized and chaotic airport I’ve ever seen.


At least I was at the correct airport. On my way to the entrance I talked to an Irish guy who had forgotten that Ryanair’s definition of Barcelona is ‘Barcelona, give or take 50 km’.


Was it worth it?


Yes. I have to say that I learned a great deal about upcoming technology like .NET 3.0, C# 3.0, the next C++ release etc.


Despite what you might think, going to a conference is hard work.


There were five sessions of 1 hour and 15 minutes per day. Since most of them were about new things, you really have to pay attention. In between sessions I made short summaries. This means I was constantly busy from 9 AM to 7 PM.


But it was a great experience. Nice people, free food and drink. Also, hanging out with the Microsoft VC++ people was fun. Being an MVP gave me access to lots of inside information.


The best speaker of the conference was Kate Gregory, with her session on extending C++ projects with managed code.


Other good sessions were all of the C++ sessions, the one about Team System for software developers, the one about CAS and the ones about .NET 3.0.


To all the people who were out there, thanks for making tech-ed 2006 a success.

Tech-Ed developers Barcelona: Friday

Today is the last day of tech-ed. Hopefully I’ll arrive in Brussels somewhere around 23:00.

I really start to miss my wife and daughter, so I am glad I’ll see them again tonight.

Yesterday evening was an uneventful evening in my hotel room. I decided to start reading that book I bought on WPF to pass the time.

After a cup of the Happy Juice I make my way to the final C++ talk of this conference.

DEV407: Visual C++ new optimizations

This session is hosted by Ayman Shoukry of the VC++ compiler team.

There are a good 50 – 60 people here. Less than other C++ talks, but I suspect that the reason has something to do with the fact that it is 9:00 on a Friday morning, after the community dinner on Thursday evening. Another reason might be that this session has nothing to do with the language itself, or the .NET framework.

Still, not a bad turnout at all.

C++ optimization is a very interesting topic. It is for this kind of high level stuff that I go to a conference. The coverage was really in-depth and described the available compiler optimizations in great detail.

All in all this was a very good presentation with a good Q&A.

Single file optimization

For single file optimization, the VC2005 compiler has improved by 20 – 30 % compared to VC6.0. This alone is reason enough to upgrade if you have computationally intensive programs.

Whole program optimization

Whole program optimization basically means that the linker can inspect the object files to see which functions get called where. It then passes the object files back to the compiler to optimize them in certain ways. The 2 major improvements are cross module inlining and custom calling conventions.

Profile guided optimization

If you use PGO, the linker instruments your code with probes. The 2 major probe types are counting probes (how many times is a certain code path taken) and value probes (saves a histogram of values for e.g. function parameters).

Then you run common usage scenarios on your app. The captured data will tell the linker how you use the app, and how it can optimize. It can then create a new binary, based on this info.
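
As a rough sketch, the command-line version of that workflow looks something like this with the VC2005 toolset (the project system exposes the same steps through settings; exact options may vary per project):

cl /c /O2 /GL myapp.cpp
link /LTCG:PGINSTRUMENT myapp.obj      (produces the instrumented binary and a .pgd file)
... run your common usage scenarios ...  (each run writes out .pgc profile data)
link /LTCG:PGOPTIMIZE myapp.obj        (re-links using the collected profile data)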

Btw, an instrumented app (with all the probes) is slow. Very very slow. Think about a snail on valium, paralyzed from the neck down, crawling uphill. But that doesn’t matter, because the instrumented app is used only for analysis.

PGO can yield 20% increases in some cases, because the default code path gets optimized really well. This will of course decrease performance on the non-default path, but since that is non-default, it shouldn’t matter that much.

PGO can use the following mechanisms for optimizing performance:

  • Inlining: the linker examines the call graph, and can decide to inline functions at each call site separately.
  • Switch expansion: the linker can extract cases from a switch statement if they are executed often.
  • Code separation: this will move blocks of code so that the common code path always falls through, and moves uncommon code paths behind it.
  • Virtual call speculation: this will introduce a type check whenever a base class is used to call a virtual method. If the class is the same in 90% of the cases, it is quicker to check the type and call the direct method than to execute the virtual call via the base class.
  • Partial inlining: this will inline only those parts of a function that are in the hot path, while leaving the cold parts in a function.

One thing that is worth remembering with PGO is this:

There are some situations in which you should be able to reproduce the exact same binary image from your source code.

E.g. in the space industry, if ESA wants you to rebuild version X.Y.Z of an application, it had better be the exact same binary image. It is simply not allowed to build a version that might have slightly different timing behavior, because that would mandate a complete re-run of all acceptance tests.

This means that if you have such a requirement, you have to put the *.pgc files under source control as well, so that the linker can always reuse a known set of instrumentation data to create an exact replica of what you shipped earlier.

Floating point optimization

/Op will be deprecated. The documentation states that it enables greater floating point consistency. The problem is that nobody really understands what it does, not even the compiler folks themselves. And the old floating point model was outdated anyway.

There are 4 new floating point models that are designed to give you better control of floating point behavior.

  • /fp:precise: this is the default, and it gives you a compromise between speed and precision.
  • /fp:fast: this will yield the fastest code, but you are not guaranteed the last digit of precision. Whether this is acceptable or not depends on the application.
  • /fp:except: this will cause floating point exceptions to be raised on the exact line on which they should occur, whenever an exception should occur.
  • /fp:strict: this implies /fp:except, and will yield the best precision possible, but with a performance penalty compared to /fp:fast or /fp:precise.

Now, if – at this moment – you are thinking ‘OMG M$FT is shipping bad code’ then this is the point where I tell you to calm down and sing kumbaya.

In case you didn’t know, there is a fundamental problem with floating point math. Not all numbers are representable in the IEEE floating point model.

One problem is that 1e20 + 1 is still 1e20 using the IEEE fp, because the mathematical result is simply not representable in that format. Hence the result defaults to the closest number that is representable, which is 1e20 again.

Another problem is that, while A + B + C is equal to C + B + A on a mathematical level, this is not true at code level. It all depends on the precision and the order of magnitude of A, B, C and the intermediate results.

This issue is there for ALL compilers out there, not only C++, but also in C#, Java or whatever language you are programming in.
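
A small C# snippet (any language with IEEE doubles behaves the same way) makes both effects visible:

using System;

class FloatDemo
{
  static void Main()
  {
    double big = 1e20;
    Console.WriteLine(big + 1.0 == big);   //True: the exact result is not representable

    double a = 1e20, b = -1e20, c = 1.0;
    Console.WriteLine((a + b) + c);        //prints 1
    Console.WriteLine(a + (b + c));        //prints 0: same terms, different order, different result
  }
}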

Floating point math is extremely hard to understand, and the value of a result depends on the exact sequence of instructions. Given an equation, it is very hard to determine if the result is correct or not, because ‘correct’ has a fuzzy boundary in the IEEE floating point model. You can only say that code is correct for a given definition of ‘correct’.

Private talk about Interop

Craig Kitterman (MSFT) invited me to a private talk about the interoperability community and what I could contribute to it.

There is no interesting session in this slot anyway, so I decide to accept.

This is MVP stuff so I won’t tell more of it.

DEV004: building a distributed application with .NET 3.0

This session is hosted by Christian Weyer.

It is to be a demo of building a distributed application. It sounds interesting enough to spend my time here.

It is a level 400 session so I hope I can keep up.

Turns out I can’t.

The speaker is very good, but the application is already finished, and he is demonstrating the architecture and talking about things like service contracts etc. A good talk, but wasted on me.

Goodbye

The last session slot does not have anything that could interest me. This makes sense because a lot of people are leaving already, and they would not want to waste A level sessions on this session slot.

I think I’ll catch an early ride to the airport and do some reading. The battery of my laptop is past its peak, so I can only use it for about an hour before it needs AC power anyway.

I’ll post my afterthoughts on tech-ed somewhere next week.

Tech-Ed developers Barcelona: Thursday

Yesterday evening my Tech-Ed issued tram card decided to die on me. I hadn’t noticed, but as luck would have it, a group of ticket inspectors entered the tram, accompanied by some tough looking stewards with handcuffs and batons.

My card proved to be invalid, and the inspector said something about the magnetic strip failing. He wanted to replace it with a new one, and I tried to tell him that I should get one with at least as many rides left as I would normally have.

At this point I should point out that he did not speak a word of English, and I obviously don’t speak Spanish. Luckily he decided to do the wise thing and gave me my card back, gesturing ‘it’s OK’.

This saved us both an hour of frustration, trying to understand each other and solving this in a way that would satisfy us both. The old ‘I am a dumb but innocent tourist’ trick still does the job.

Since I only have to take the tram 2 more times I’ll just keep on using it, instead of trying to get it replaced.

So after a healthy breakfast I now arrive at the convention center to get my first cup of coffee for the day. In fact the coffee is so good that I call them little cups of happiness.

The fact that I call them that does NOT mean that I am addicted, ok?! Now let me get to the caffeine altar to get my cup of happiness before going to the first session.

DEV321: Delving into VS team system for software developers

This talk is hosted by Brian Randell, and it covers Visual Studio Team System for Developers. We don’t use this at my company (because it costs $$$, and it is not our core business) but I have it through my MSDN subscription, and I figured I’d see what it was all about.

Brian is another good speaker, and keeps the tone light while covering all different topics. It is mostly a succession of demos, glued together by a minimum of slides.

The key idea is that testing alone is not enough to guarantee a high quality app, and there were some statistics to back this up, along with a graph showing that the cost of a bug increases the later it is found.

I’ll just go over the different topics in sequence.

  1. Unit testing of managed code via VS, and generating the unit tests automatically. This is a really cool feature, and it can save you lots of time in setting up your test harnesses. Unfortunately this feature is not available for unmanaged C++.
  2. Code coverage. This tool can show you how much of your code was really executed in your tests. This is useful to determine the amount of dead code in your app, as well as making sure that your tests cover most of the code you’ve written.
  3. Static analysis. This allows you to analyze the source code itself to look for things that look suspicious. For managed code you use FxCop, for native C++ code you use PREfast.
  4. Dynamic code analysis. This is only available for native C++ applications, and it checks for heap violations, handle violations and locking errors while your code is running, and presents you with a list of problem spots that you need to correct.
  5. Code profiling. This allows you to take performance measurements while your program is running to identify performance hotspots in your code. This can be done through sampling (which you do to get a first rough idea) and instrumentation (which you do to get detailed results).
    You don’t use instrumentation from the beginning because that would fill up your memory and hard drive very quickly.

Overall, this was a very good session that gave me a good idea about the capabilities of team studio for developers.

I think I’ll start using at least the analysis tools for some of my larger projects. Unit testing is less of an option because most of my code is unmanaged, but I might do it on some C# code just to get familiar.

All in all this was another very good session.

DEV365: porting .NET applications to 64 bit Xeon and Itanium

This session is hosted by Samah Tawfik of Intel Corporation.

This was another session I really wanted to see because 64 bit is going to become very real within the next year, and it is a good idea to learn about it so that I know what this will mean for me.

Not that I see myself developing for Itanium any time soon, but you never know what might happen, and chances are I’ll buy a dual or quad core Xeon in the beginning of next year, so this might be useful at a personal level too, because who doesn’t write his own custom apps for automating household tasks, right?

The architectural differences between 64 bit Xeon and 32 bit Xeon can be summarized as:

  • Extra memory space
  • Extra registers
  • Double precision ints
  • Flat virtual address space
  • 32 legacy mode, 64/32 compatibility mode, 64/64 mode

Then there was an architectural comparison between x64 and ia64.

Basically, the major reasons to upgrade to 64 bit are the extra memory space, and the extra registers and instructions that allow compilers to take advantage of extra features.

There was a benchmark that actually showed that 64 bit compiled code can be significantly slower than 32 bit code for smaller loads. The reason is that the 64 bit code is not yet able to take advantage of all the 64 bit features, while still carrying the overhead of doing so.

Intel has a suite of tools that allow you to design, develop and debug applications that take full advantage of 64 bit features and parallelism, and they integrate fully with Visual Studio.

For targeting .NET 64 bit you basically don’t have to care whether it will run on 32 or 64 bit, as long as you are not using P/Invoke or unsafe things. In that case you should set the CPU type to restrict the .NET assembly to run only in 32 or 64 bit mode. Not doing so can just crash your app, because native DLLs are only loaded at runtime, when a native function is executed for the first time.
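
For reference, a hedged example of how that restriction is typically applied: with the command-line C# compiler it is a single switch (in Visual Studio it is the ‘Platform target’ project setting; the file name here is made up):

csc /platform:x86 MyApp.cs    (forces the assembly to run as a 32 bit process, even on 64 bit Windows)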

The presentation itself was not very good. It was not bad either, it was just OK. This is not criticism of Samah. I have done presentations like this myself for a long time, and I started out being very bad, progressed to mediocre and have now reached a point where I am just average.

Speaking in front of an audience and connecting with it is hard, and it takes a lot of skill. Not everybody is able to do this well.

The Q&A session was very good however, and Samah handled difficult questions very well, and gave clear and concise answers. She knew what she was talking about and was well prepared.

One thing I learned was that if an app is just very computationally intensive, going to 64 bit might not give you any advantage, at least if it is 64 bit .NET code. Just going 64 bit is not a magic bullet.

Native code on the other hand can really take advantage of 64 bit CPUs, because the native compiler can optimize algorithms and play with registers and pipelines in a way that is not possible for the .NET JIT.

DEV320:  What’s new in Visual C++ ‘Orcas’

This talk is hosted by Steve Teixeira, and we are all glad that he has his luggage back by now.

This conference room is actually one of the bigger ones, with room for roughly 200 people. It is slowly filling up, but I suspect that it is not going to get very crowded, partially because this session is about what’s coming in the future, instead of using what there is right now.

Still, there are at least a 100 people here.

Before I go any further I need to stress that Steve mentioned that the list of new features is not set in stone, nor is there a fixed release date for Orcas. There is a very good chance that the features mentioned below will make it into Orcas, but the feature list is not yet publicly committed to.

The Visual C++ mission is to provide world-class native tools while bridging next-gen technologies.

There are basically 3 major types of projects that need C++:

  • Projects that have to be crossplatform.
  • Projects that have a large existing C++ codebase.
  • Projects that need a large degree of control over runtime behavior.

To cater to these different types of projects, Orcas VC++ is moving in the following directions:

  • Support platform technology and renew investment in unmanaged libs
    • New development for MFC libraries.
    • Making VC a good Vista LUA citizen
    • Supporting the Vista SDK
  • Advance interop with managed code
    • STL/CLR template library to provide a very easy template-based means of converting managed types to unmanaged types and vice versa
  • Developer agility:
    • Improve compiler throughput by enabling concurrent compilation of cpp files, as well as enabling the incremental build of mixed mode solutions by looking at assembly metadata, and only considering files changed if the metadata they generate has changed.
    • Allow targeting of multiple .NET platforms, so that using Orcas does not mean you have to upgrade to .NET 3.x
    • Deliver a new C++ class designer

There is of course also a list of things that are going to be cut from Orcas:

  • ATL Server: this is going to be split off from VC and converted into a shared source project, kind of like what happened with WTL.
  • /clr:oldSyntax: it is going to be removed. With Orcas it will no longer be possible to compile the old Managed Extensions for C++ code. This is a good thing (I will write a separate article on my blog about this).
  • Pre Windows 2000 targets are deprecated. Another ‘thank God’. Windows 98 was very good at the time it was released. It was much better than Windows 95. However, it is severely limited by the fact that the win32 API and the Windows system itself have evolved toward the NT family of systems. Functionally, Windows 2000 was the marriage between the 9x series and NT4.0.
    Windows Me is what happens if you extend a design beyond the parameters it was designed for. It should never have existed. It is unstable, and all new functionality has been grafted on in a way that makes it look like the Frankenstein of operating systems.
    By removing support for those systems, large parts of the MFC library and other libraries and SDKs can be significantly cleaned up and reduced in code size.
  • /Wp64: this is a compiler switch that you can use to warn for 64 bit portability errors. It was introduced at a time when the 64 bit compiler did not yet ship.
    Since the 64 bit compiler does ship, it makes no sense to support this switch anymore. If you want to know if there are 64 bit compilation issues, just compile with the 64 bit compiler.

Steve also explained that generally, things are deprecated in one release, and then removed in the next release.

However, this is not going to happen with all the unsafe C runtime functions that are now marked as deprecated.

Basically, Microsoft introduced a ‘safe’ version of each function in the C runtime that is susceptible to buffer overflow problems, and they needed a way to tell programmers that they had better use the new functions instead of the old ones.

Unfortunately someone decided that this mechanism already existed in the form of the deprecation pragma and decided to use that, much to the horror of many C++ programmers.

The VC++ team has recognized the unfortunate message that they delivered, and this message will be changed in the Orcas release to give a better, less hostile message that does not contain the word ‘deprecated’.

The C runtime functions which are now marked deprecated will NEVER be removed from VC++, because they are part of the standard C/C++ runtime.

All in all this was another high quality lecture.

DEV326: Not faster processors but more processors

There weren’t a whole lot of other options, so I chose to go to this session. My only other choice would have been DEV359: .NET hidden treasures, but after reading the intro I decided that since I already knew 3 of the 4 treasures they mentioned, the session would probably be just a waste of my time.

Anyway, this session is hosted by Carl Franklin.

Great. This entire presentation is going to be code demos (yaay) but it is done in VB.NET. (barf). Couldn’t they at least have mentioned this in the session description?

I’m out of here.

DEV356: Using OpenMP and MSMPI to develop parallel high performance apps

This session is hosted by Saptak Sen.

Since the topic is OpenMP it is very unlikely to cover VB.NET. Now don’t get me wrong, the market for VB.NET is still huge, and I don’t belittle the people who use it, but I really don’t like the verboseness of the language.

Anyhow…

Windows 2003 Compute Cluster Server (WCCS) has the following goals / key concepts behind it:

  • Simplified deployment and submission and monitoring of jobs.
  • Leveraging existing knowledge and infrastructure to simplify HPC.
  • Allow programmers to use a familiar development environment.

WCCS is actually made up of Windows 2003 Cluster edition, and the Compute Cluster Pack (CCP).

The OS is used to manage the hardware and to provide a high bandwidth low latency interconnect.

CCP contains the support for standard MPI, job scheduler and CCS SDK.

OpenMP is supported by Visual Studio directly, but only in C++. You can use OpenMP in a .NET application, but only if you use Visual C++. This is one of the areas in which C++ has the edge.

OpenMP can get you big gains in the parallelization of long loops without loop dependencies.

The number of OpenMP threads can be set statically at compile time, or dynamically at runtime.

MSMPI is a networked protocol that allows you to distribute tasks by sending messages to different nodes in the HPC network. There is of course more to it than that, but that’s the gist of it.

There is a lot of scheduling, security and other stuff going on, but functionally, nodes can send messages to each other that trigger them to do something, after which they return the results. It is more complicated than OpenMP, but easier than programming everything yourself using Windows sockets.

The presentation wasn’t bad, but this technology is not likely to be something that I will ever use, except perhaps the OpenMP stuff, that might be worth diving into to learn the basics of it.

 

Tech-Ed developers Barcelona : Wednesday

So yesterday I went out with Ayman Shoukry (VC product manager), Steve Teixeira (VC Product Group manager), Marcus Heege (VC MVP), Siddhartha Rao (VC MVP) and an architect from MS whose name I forgot. I’ll update when I’ve had a chance to ask Steve.

It took us a while to find a restaurant, mainly because most proposals were vetoed by one person or another, but in the end we found a nice Italian restaurant. Well, it was either that or buy a large bag of candy at point.

We had a very nice evening: good food, a few beers, and conversation about things like compiler trivia, politics and funny on-the-job stories. They were all very friendly people.

Time flew and I left shortly after 23:00 because the last tram to my hotel area was at 23:45, and I did not fancy having to walk the entire distance or finding a cab.

So my night was a bit short, but luckily I was able to start with the breakfast of champions. The only thing better than having a plate full of bacon in the morning is of course having 2 plates of bacon.

I skipped the coffee this time because it was very foul stuff, whereas the coffee at the convention centre is very good.

But I did drink 2 glasses of orange juice, so that counts as vitamins.

DEV323: C# 3.0, Future language innovations

This session was again hosted by Anders Hejlsberg.

As you might guess from the title, it was about new features of C#3.0.

  1. C# 3.0 is completely backwards compatible with C# 2.0 and C# 1.0. Anything that compiles with an earlier compiler will compile with C# 3.0 without problems.
  2. contextual keywords: new features like LINQ use special keywords to create statements. But these are only keywords in the statement in which they are used. This means that your code won’t break if you have a variable named ‘from’ or ‘select’.
  3. Local variable type inference: using the ‘var’ keyword, you tell the compiler to take the type from the right hand side and place it on the left hand side, like this:
    var myVar = new List<int>();
    Note that myVar is still strongly typed. The compiler just saves you the trouble of having to type long type names twice.
  4. Anonymous types: these give you strongly typed variables whose type is inferred by the compiler at compile time by looking at what you are constructing them with.
  5. Extension methods: the new ability to write extension methods allows you to define new instance methods for existing classes without changing those classes in any way. A very powerful feature.
  6. Lambda expressions: I am a novice in this area, but if I understood it correctly, lambda expressions allow you to pass code as data, removing the need to explicitly create delegates to do something inline.
  7. Tied to lambda expressions are expression trees. These make it possible to turn lambda expressions into expression trees that can then e.g. be parsed by a SQL generator to generate code for interacting with a SQL database.
  8. Object initializers: these allow you to initialize objects when they are constructed, by supplying values for public fields and properties.
  9. Tied to object initializers are collection initializers. Collections can be initialized at construction by supplying the data that has to be put into them. A requirement is that the collection implements IEnumerable and has a public Add method.

All these powerful features together are used to build LINQ (see my blog from yesterday). Any incorrectness in my explanation is of course my own. I am pretty sure that Anders explained it correctly, but I am not that sure that I understood everything correctly after seeing it only one time.
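
For what it is worth, here is a small sketch that combines several of these features, using the syntax as it was shown at the conference (the final syntax could still change before release, so treat this as an illustration rather than gospel):

using System;
using System.Linq;
using System.Collections.Generic;

static class Csharp3Demo
{
  //An extension method: adds Squared() to int without touching the int type.
  public static int Squared(this int n) { return n * n; }

  static void Main()
  {
    //Type inference plus a collection initializer.
    var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

    //A query over the list; 'from', 'where' and 'select' are contextual
    //keywords, and the select clause builds an anonymous type.
    var evens = from n in numbers
                where n % 2 == 0
                select new { Value = n, Square = n.Squared() };

    foreach (var item in evens)
      Console.WriteLine("{0} -> {1}", item.Value, item.Square);
  }
}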

DEV205: Code Access Security

To be honest, I am only attending this session because there is nothing else in particular that I wanted to see.

It was either this or DEV339: Creating Windows and Browser apps with WPF. However, I know virtually nothing about Code Access Security (CAS), whereas I already know a bit about WPF, and this should be easy enough to figure out by myself.

The nature of the software that I write for customers does not call for security measures. We typically have administrator access to the machines that the software runs on, and we are allowed to add firewall rules, do a custom DCOM or .NET security configuration, or change folder permissions to allow our applications full control.

This session is hosted by Keith Brown. For those who know Dutch cabaret comedians, his sense of humor resembles that of Bert Vischer, though not as ADHD of course.

Code from remote locations is run in a sandbox, and is not allowed to do much.

CAS protects the user from the software, rather than protecting the software from the user. CAS is there to ensure that an application cannot do things that you do not entirely trust it to do.

For example, to call any native code or use the ‘unsafe’ keyword, your assembly needs to be granted FullTrust. Anything less and it will trigger a security exception.

Btw, a really cool toy is Lutz Roeder’s Reflector. It is a disassembler for .NET assemblies that will show you what the original source code looks like. You can use this tool for example to read the source code of the .NET framework classes.

Keith talked a bit more about using isolated storage, and about not using strong names for security, because they aren’t secure. If you want to use strong names for versioning, you always have to use the same private key, and any cryptographic scheme in which the private key cannot conveniently be changed is not secure.

On top of that, there is no way to recall lost or compromised keys, so strong names should be thought of only in the context of versioning.

Apart from the general talk about dealing with running in partially trusted zones, Keith pointed out the assembly attribute AllowPartiallyTrustedCallers.

When used in a class library, this attribute tells the JIT that the library designers assert that this library is not doing anything that could be used as an attack vector for a malicious application.

While of course being convenient for the calling application, this also means that the library designer has to be really sure that the library is indeed safe to be used in a hostile environment. To do this properly, the library developers should do design analysis, security auditing, etc.
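
For completeness, a hedged sketch of what that looks like in a class library: it is a single assembly-level attribute, typically placed in AssemblyInfo.cs (this snippet is mine, not from the session):

using System.Security;

//Assert that this library is safe to be called from partially trusted code.
//Only do this after the library has been reviewed for security issues.
[assembly: AllowPartiallyTrustedCallers]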

DEVWD15: Hardcore .NET production debugging

This is one of the sessions that I really wanted to see because it covers –among other things – WinDbg – which is an incredibly powerful debugger.

Ingo is a very energetic speaker, and he really covered a lot of ground using different debuggers.

As with the C++ whiteboard discussion yesterday, the room was packed 10 minutes in advance, so we got an early start.

Ingo explained the basic debugging commands, and how to use them to debug applications that would not start, or that were leaking memory, etc.

I am not going to make an overview of the different techniques over here. One reason is that I have forgotten the exact sequence of commands and actions for most scenarios by now. Another reason is that there is so much to write that I would be writing a complete article on debugging, which is not my intention right now.

I certainly learned a lot in a very short time.

Marcus’ generous offer

This meeting was attended by Marcus Heege as well. The people from Microsoft told him that they had to turn away more than 50 people during his session because the room was already full. Because of this, they asked him to repeat that session on Thursday.

Unfortunately, he had to leave today so he asked if I wanted to host that session in his place.

However, it was supposed to be an ad-hoc whiteboard discussion. This posed the following problems:

  1. I had nothing to start from (no written presentation to take over)
  2. I had absolutely not prepared myself to do this. Extending existing projects with .NET code is an advanced topic. I know how to find my way through that forest if I am sitting behind my PC with enough time to search for solutions to the inevitable problems that will arise.
    Imagine I am up there and people are asking questions like ‘Why does my application crash / hang when COM does this and .NET is used like that?’
    Going ‘Erhhhhm…’ at that point would make me look foolish, as well as letting down the expectations of the people in the session.

It was very kind of Marcus to suggest this, but he is an expert on this topic, and I am not.

I thought hard about it, but with great reluctance I decided to decline.

If it was merely the intention for me to give a presentation, I would have taken the opportunity to go out there and do it, because then I could have taken over an existing presentation + demo, practiced it a bit and gone Live(tm) with it.

As it was, this was just too dangerous, and too likely to end in a big disappointment for myself, the audience and the Tech-Ed people.

DEV340: building data driven applications with WPF

This session was hosted by Ian Griffiths.

I only attended this session because there was nothing else of interest to me, and I thought it might be nice to get some more information on this topic.

Unfortunately, I do not know more than just the basics on data driven apps.

His presentation quickly went into data binding, data context and other things that I have never used in an application, so I was out of my league very quickly.

That is why I decided to leave the presentation after half an hour so that I could write up my other experiences of this day so far.

I am sure that the quality of Ian’s presentation was OK, but it just was not a good topic for me.

Btw, me leaving early also gives me the chance to write this stuff and still be in time for the next presentation on C++/CLI.

It is delivered in room 119, which is one of the smaller ones. Given the success of Marcus’ talk, I expect that room to be packed very early by lots of interested persons.

Microsoft really needs to allocate bigger conference rooms for C++ sessions because interest is high.

DEV406: extending native C++ applications with managed code

WOW.

This was just the best session I have seen this tech-ed, or possibly ever.

Kate Gregory is a gifted speaker and can keep a fast pace AND bring a clear explanation at the same time. This 1.5 hours just flew by.

This session was about the different ways in which you can move large C++ code bases to use parts of the .NET framework.

It started with architectural explanation of what this means, and then showed a number of scenarios.

If this topic interests you at all, find the PowerPoint presentation on the Tech-Ed site. It is of such high quality that you can actually use it as a checklist for adding managed code to existing projects, as well as a way to get a better fundamental understanding of this topic.

It is also worth noting that there were a good 150 people in this room with me, leaving only a few empty seats. As I mentioned before, C++/CLI is a hot topic.

Tech-Ed developers Barcelona: Tuesday

After a healthy breakfast (Bacon and coffee) I was off to the convention centre. The weather is nice.

Wireless access is a bit slow, but I suspect that this has something to do with hundreds of users sharing an 11 Mbps network.

Keynote

It’s a keynote. You see one, you’ve seen them all.

This keynote focused on technology, and what it can do for people.

There was an interesting demo from an 11 year old Pakistani girl named Arfa. She is certified in Windows C# applications and ASP.NET. She was very bright, and was learning technology to help people in her country by bringing technology to Pakistan. She was very young, but also very mature. I guess she did a lot of growing up in a very short time.

There was also some talk about the Imagine cup, a programming contest for students to design solutions for helping people with certain problems.

Then the real keynote by Eric Rudder started. He emphasized the integration of information between different devices, as well as connecting business with people, information and processes.

There was a demo of an integration exercise between Visual Studio and .NET3.0, Office and Ajax.

It was nice, but a bit long, with lots of yada yada yada, integration, bla bla… It wasn’t bad, but I am here for technology, not marketing.

Then there was a demo of the LINQ technology in .NET 3.0 by Anders Hejlsberg. LINQ is impressive. It allows you to use SQL-like syntax in source code to bind and manipulate information from all sorts of data sources, and use them like you would use other native C# objects.

Another nifty feature is that you can copy XML data, and then paste it as the C# code to generate that data.

The funny thing was that Eric Rudder started his keynote with the quote ‘May you live in interesting times’. After which he said that these are interesting times indeed, and a lot is happening. He attributed this quote to Robert Kennedy.

Apparently, he did not know that that quote is the first half of an ancient Chinese proverb. The full quote is ‘May you live in interesting times, and attract the attention of the emperor.’

He also did not know that this is actually a curse wrapped in nice words. The interesting times refer to unrest, arrows being shot at you and general unpleasantness.

The attention part refers to you being enough of a nuisance to attract the attention of the emperor, after which your future tends to be very short and eventful, e.g. hanging upside down in the scorpion pit or something similar.

With that in mind, I wonder what we can really expect of Vista and the next generation of Visual studio, office and .NET.

DEV201 – Introduction to .NET 3.0 (Formerly WinFX)

This session is packed. A lot of people want to know what’s going to hit them with .NET 3.0.

This session is hosted by Dave Webster, who is a gifted speaker. He is one of those people who make it look easy to stand in front of a demanding crowd and talk about stuff.

The first thing (which I didn’t know) is that .NET 3.0 is really .NET 2.0 with some added features. When you install .NET 3.0, it will install .NET 2.0, and then install the WCF, WF, WPF and CardSpace components in a .NET 3.0 folder.

WPF

Windows Presentation Foundation (WPF) is going to take UI design to the next level. It is going to break down the difference between ‘slow’ web applications with strong branding (think www.virgin-express.com) and powerful Windows applications with little branding (think of any Windows application that is not themed with lots of effort).

The way WPF does this is by providing a set of controls that directly interact with the Graphics card in your system. Hence the need for a good GPU if you want to run WPF.

There was a demo of the NY Times website, which is really a web application with XAML markup that looked like an actual newspaper.

This is very important, because we expect data to be presented in a familiar way.

This also showed that in the future, these types of applications will be developed by a combination of a developer (who has to know how to code) and a visual designer (who has expertise in UI design and markup).

My personal experience is that these 2 can almost never be the same person. The visual capabilities of WPF are astounding, but far too extensive to describe in any detail here.

WCF

Windows Communication Foundation (WCF) covers everything to do with remoting, messaging and transactional actions.

The high level idea is that everything is a message between endpoints. The WCF stack covers multiple layers.

  • Address layer: where messages go to or come from.
  • Binding layer: how messages are transported and encoded.
  • Contract layer: what is in the message.

It really makes distributed systems (like client server applications) much easier to write and design.
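
As a rough illustration of how those layers show up in code, here is a minimal self-hosted WCF sketch under my own assumptions; the ICalculator contract, the Calculator class and the port number are invented for the example:

using System;
using System.ServiceModel;

// Contract: what is in the message.
[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    int Add(int a, int b);
}

public class Calculator : ICalculator
{
    public int Add(int a, int b) { return a + b; }
}

class Program
{
    static void Main()
    {
        ServiceHost host = new ServiceHost(typeof(Calculator));

        // Address: where messages go to. Binding: how they are transported and encoded.
        host.AddServiceEndpoint(typeof(ICalculator),
                                new BasicHttpBinding(),
                                "http://localhost:8000/calculator");

        host.Open();
        Console.WriteLine("Service is running. Press Enter to stop it.");
        Console.ReadLine();
        host.Close();
    }
}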

WF

Windows Workflow Foundation was first abbreviated as WWF but, to avoid confusion with the World Wildlife Fund and the World Wrestling Federation, was rebranded as WF.

WF makes it possible for high level architects to design systems graphically as a process flow. This means that something happens, and there is a known process for handling the event or data.

What makes this different from designing a regular app is that it is very easy to integrate services and components from different machines, add in scheduling and user management, and all sorts of special behavior, like a system-wide process flow chart.

This also allows high level architects to design entire systems without having to know much of the low level details of each component. Other developers can then use their in-depth knowledge to implement the functionality of custom process steps.

CardSpace

CardSpace is the technology that will be used as an identity-providing system for the internet.

Microsoft recognizes that the whole Passport thing was essentially a flop because they didn’t realize at the time that institutions like banks would never start using something that was designed and maintained by Microsoft only.

The technology in CardSpace has been designed by a consortium of major industry players like IBM, Sun, Microsoft and others. This makes internet wide adoption much more likely. And hopefully this will happen over the next couple of years.

An interesting thing to mention is that the Cardspace stack (everything that you have to do to manage it) uses its own private desktop session. This means that no other applications on your system can access the CardSpace software.

This is essentially the same way that your Windows logon box is protected. Whenever you hit Ctrl + Alt + Delete, you get transferred to a desktop session that can only be accessed by winlogon.exe.

DEV225: WPF introduction

I had to choose between this session and DEV220: Overview of Visual Studio Extensibility and the Visual Studio Extensibility Program.

That looked really interesting as well, because it would give me some more information on how to create project templates for C++. That topic is interesting on a personal level, but it’s not something I would need much for my work, so I decided to learn some more about WPF instead.

This session is hosted by Mike Pelton of PollyTiles. The guy is a good speaker, and has a good sense of humor. Starting off a presentation by mentioning the John Cleese parrot sketch shows that you know what humor is about.

As for WPF: it is really an interesting technology. The key idea is that the functionality of your application is contained in code files, while the visual style of your application is contained in *.xaml files.

The reasoning is that by working like this, it becomes much easier for the programmers to care only about what an application has to do, while allowing the visual designers complete freedom to modify the visual style of the application.

Styles can contain a gazillion visual effects, as well as timelines. This allows for creating any arbitrary effects that you might want on your user interface.

Designers can use separate tools for creating those styles. These tools can be much more powerful than the default VC2005 editor, and resemble professional graphics tools like Photoshop.

There is little more that I can explain here, but remember those user interfaces from Hollywood films about which we – professional developers – say ‘Haha, as if’?

Well WPF makes it almost trivial to design applications like that. The only boundary is your imagination. This also shows that the developer becomes the limiting factor. I can perfectly well make sure that an application does what it is supposed to do, but to make it look cool you really, really need a graphics designer.

DEV223: The .NET Language Integrated Query (LINQ) overview

This session was hosted by Anders Hejlsberg. It was a good session. No marketing fluff at all, just technical content.

Language Integrated Query (LINQ) is one of the most powerful advances I have ever seen in software tech.

LINQ solves the whole data != objects problem.

LINQ supports an SQL-like syntax that gets translated into extension methods and lambda functions by the compiler. Lots of buzzwords there, but what it really means is that the compiler will translate the SQL-ish syntax into true .NET code without requiring any modifications to the data on which those queries are executed.

This allows you to use simple syntax to filter and join data from all sorts of collections: arrays, tables, XML documents, relational databases…

One of the key improvements here is that C# 3.0 supports anonymous types. The compiler will build anonymous types based on the LINQ query that you have written. This enables you to use both relational and hierarchical result sets in a strongly typed way.

Anything you do with that data later on will be verified by the compiler because it knows what the result looks like, so it knows whether what you are trying to do with it is possible or not.
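
As a small sketch of my own (not from the session) of what such an anonymous type looks like, assuming a made-up collection of people:

using System;
using System.Linq;

class AnonymousTypeDemo
{
    static void Main()
    {
        var people = new[]
        {
            new { Name = "Anders", Country = "Denmark" },
            new { Name = "Marcus", Country = "Germany" }
        };

        // The compiler generates an anonymous type with Name and NameLength members.
        var result = from p in people
                     select new { p.Name, NameLength = p.Name.Length };

        foreach (var r in result)
        {
            // r.Name and r.NameLength are known to the compiler,
            // so misspelling a member is a compile-time error.
            Console.WriteLine("{0}: {1}", r.Name, r.NameLength);
        }
    }
}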

It is worth noting that the underlying queries are not executed until you look at the data that they would produce.

This means that you can have a result set the size of a database, and yet only execute true queries the moment you look at a piece of data. And even then, the only queries that will be executed will be for that specific data.
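
A minimal sketch of my own that shows this deferred execution with an in-memory list (names and data are made up):

using System;
using System.Collections.Generic;
using System.Linq;

class DeferredExecutionDemo
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3 };

        // Nothing is executed here; the query is only a description of what to do.
        var evens = from n in numbers
                    where n % 2 == 0
                    select n;

        numbers.Add(4);   // added after the query was written

        // The query runs only now, while we enumerate the results,
        // so it also sees the 4 that was added later.
        foreach (var n in evens)
            Console.WriteLine(n);   // prints 2 and 4
    }
}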

At this moment it is possible to work with data from .NET objects, from relational data sources and from XML.

Another good thing is that the low level plumbing that makes this all possible uses an open API that simply has a default Microsoft implementation. Anyone can insert his own plumbing (should you ever want to), and anyone can also create an interface that can work with LINQ.

One example of this is that apparently, someone is working on a LINQ interface to amazon.com, enabling you to perform LINQ queries on the amazon inventory, with all the power of LINQ at your disposal.

Another thing I found impressive is that there is a tool (forgot its name) that allows you to automatically create a complete .NET API for a SQL database.

This was only an introduction session to LINQ, so I do not have more than just this general understanding of it, but if you are a developer who uses C# or VB, be sure to get at least a basic working experience with it, because a) this technology will be big, and b) it can save you enormous amounts of time if you have to do complex things with in-memory data.

DEVWD13: Extending C++ Projects to .NET

This session was hosted by Marcus Heege, whom I know from the Microsoft private and public newsgroups. Being a VC++ MVP myself, this is one of the few sessions that I really wanted to see.

The session topic was extending existing C++ code bases with .NET functionality.

There wasn’t anything discussed that I did not already know, but there were some other interesting facts.

  1. The room was packed. C++ is not dead by a long shot. Public interest is big because the installed base for C++ is very large.
  2. The general public – even if they are aware of C++/CLI – know almost nothing about it. Microsoft needs to push more information to the public.

As for interop with existing code, there are a number of options, but the 2 good ones are:

  1. Add new .cpp files, put C++/CLI code in there, and compile only those with /clr.

  2. Put all managed code in a DLL, and export functions from it that use C++/CLI internally.

After the talk I had a lengthy discussion about VC++ with Ayman Shoukry (MSFT) and Marcus, but unfortunately I cannot blog about that because pretty much everything about it is under NDA.

At this moment I am at the MS booth with Marcus, Ayman and Steve Texeira, and we’re probably going out for dinner later.

Tech-Ed developers Barcelona: Monday

The airport

It seems that today, there is a new set of regulations regarding the transport of liquids and gels in hand luggage.

After being asked for the 5th time whether I might have nasal spray, shaving cream or beverages on me, I was allowed to check in. Then I discovered that I was also not allowed to take any non-essential medication in my hand luggage.

Quite what lethal mixture I could have made by grinding my Nurofen, Motilium and Ercefuryl together I don’t know, but this is apparently a dangerous thing to do.

Good thing that terrorists are not clever enough to make sure they carry a prescription for whatever medication they are going to use to make high explosives.

While I was waiting at the gate I discovered a jar of Vicks Vaporub in my laptop bag that I had forgotten about. Luckily it failed to explode in mid-flight. At least we would have gone down with the smell of pine in our noses.

The flight itself was uneventful, as could be expected and hoped for of course.

Arrival

Some beautiful ladies from the Microsoft event crew were waiting for me at the arrivals gate, and I was soon put on a shuttle bus that brought me to the convention centre.

At the registration I received a nice looking laptop bag with some folders, a couple of DVDs and a t-shirt in it. The bag itself is a high quality item so that will come in handy as well. Take that, National Instruments with your grocery store plastic bag :-)

I did not register for the pre-conference sessions because I only arrived at 15:00, so there was little point in spending $200 on the one remaining session.

After some directions from the very friendly stewardesses I was able to find my way to the tram station. I got off the tram near Glories, which is some sort of monumental building project.

By now I should mention that I am traveling alone on this occasion, and my sense of direction is extremely bad. If there were a world championship in the category ‘getting lost’, I would not even turn up.

After some confusion and the help from a couple of friendly passers-by, I was able to find the hotel. I could have gotten there a bit earlier, if not for the fact that I took the long way around.

MVP Dinner

Microsoft had rented a tapas bar for the occasion, close to the convention centre.

The food was good, and the company was great. I was sitting next to a SharePoint MVP called Joris, and MS Developer Evangelist David Boschmans.

Being who we are, the conversation centered mostly around developer topics, and Microsoft in general.

Anyhow, back to the hotel at 22:30, trying to get some sleep after a tiring day.

Why you should shave with a straight razor

Recently I have changed back to the good ol’ throat cutter, and so should you.


The money


Do you know what disposable razors cost these days? 10 euros a pack is no exception and you need more than a pack a year, even if you have very soft facial hair like me.


A plain straight razor costs 50 euros if you buy a first grade new one. It will outlive you if properly maintained.


You can also buy second hand straight razors for a bargain price. Just make sure they have an undamaged blade. A nicked blade will hurt you.


The environment


Disposables are made of special alloys that contain rare metals. Some even contain traces of platinum. That is of course one reason they are so expensive. They need to be, otherwise they would be blunt after one shave.


Consider how many of these you need per year. Now consider how many years you use them during your lifetime. That is a fairly big pile of razors.


Now consider 100s of millions of adult males, and the combined Mt. Everest of discarded razors.


They could be recycled if you would bring them to the scrap yard, but instead, almost everyone throws them in the trash bin. They end up in a landfill, polluting the environment and contributing to the high prices of rare metals.


A properly maintained straight razor lasts forever, and thus does not have a severe environmental impact.


The ‘cool’ factor


Using a straight razor makes you more macho. Imagine talking with your friends when the subject strays to shaving…


You can casually remark that you use a straight razor. Immediately you have this macho aura around you. Women will love you for being macho, or if not, for being environmentally aware.


Zen


For want of a better word, this is the word I use for clearing the mind.


I am a geek. Whenever I am doing nothing in particular, my mind schedules a low priority thread that occupies my mind with various interesting problems that I encounter in my job or when writing articles.


There are very few times my mind is free.


Shaving is one of those moments if I shave with a straight razor.


Imagine scraping 8 cm of steel across your Adam’s apple, your jugular vein or your upper lip. The steel is as sharp as a surgeon’s scalpel.


This tends to occupy the mind fully. You have to be careful and slow. Once you get some practice, you hardly nick yourself at all.


After a shave with a straight razor, your mind will be relaxed and empty.


When I was in college I always used to shave like this before an exam, even if it wasn’t really necessary. The whole shaving ritual is like free-form meditation.


Stuff my lawyer would make me say if I had one


Shaving with a straight razor is something that you have to do carefully. For proper technique, consult with a barber, your grandfather or someone else who has some experience.


If there is one thing to remember: NEVER ever move the blade lengthwise across your skin if you don’t want to see what’s underneath it.


Do it properly and you will be smoother than ever. Do it poorly and you’ll nick yourself plenty. Do it wrong and you’ll end up in ER or worse.


By now you should have enough sense not to just scrape a razor across your skin without care or counsel, so any accidents are not my fault.


In summary


Only with steel can you get a smooth shave.


Electric shavers are easy, but in the long run they cost a lot more, they are much more polluting, and the fact that you use them makes you as interesting as the fact that you clip your nails. In other words: NOT.


Disposables are bad for the reasons outlined above.


Trust me. Once you get into the habit of using a straight razor, you’ll never want to go back.