Tech-Ed Barcelona 2007: Day 3

My night has been less than ideal. The hotel maintenance crew did some work on the airco system in the hallway yesterday, and they did not adjust the flow valves correctly. So every time the airco kicked in, there was the awful sound of refrigerant cavitation.

I first called the reception at 20:30, asking if they could send someone to fix it. They were going to do that.

Of course, when the airco in the hallway turned itself off, the noise was gone and I forgot about it until 22:30.

I called again and told them I wanted to go to sleep, so they should do something quickly. They were going to do that again.

At 23:00 I had still seen nobody, and I was getting pissed off because I was really tired and I couldn’t go to sleep.

I called the reception again, and the clerk told me that they had called the maintenance guys, but no one had responded yet.

He then had the brilliant suggestion to leave a note at the reception for the maintenance crew the next day. Again I told him that that wasn’t going to do me any good, and that he should do something NOW (I wasn’t shouting. Just). He must have gotten my point, because he promised to have someone check it out in 2 minutes.

It took 6 (I was already tying my shoelaces to go down to the reception in person), but then a friendly non-English-speaking maintenance guy appeared. He looked at the airco grating, said ‘ah, si’, and left my room.

He returned with a ladder, and opened up the ceiling of the bathroom. Apparently the piping for the main airco runs through my ceiling. He closed a couple of valves, and as if by magic: ‘…’ blessed silence.

By then it was already 23:30 and so my night was a bit short. But with enough coffee and bacon this morning, I was good to go.

TLA313: Microsoft Visual C++ and Windows Vista: a natural fit

This was another great session hosted by Kate Gregory.

When it comes to talking with the OS, C++ is the most versatile language because it has native support for all of the following:

  • Call C-style functions exported from a DLL, using structs and callback functions.

  • Consume COM, with or without a typelib or primary interop assembly.

  • Implement a COM interface (so that you can be consumed by the system).

  • Register for callback notifications on system events.

  • Call managed methods or use delegates.
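The first bullet, C-style exports with structs and callback function pointers, is the same shape as a Win32 API like EnumWindows. A minimal, platform-neutral sketch (ItemInfo and ScanItems are invented names standing in for a real DLL export):

```cpp
// Sketch of the C-style DLL pattern: a plain struct plus a callback
// function pointer. ItemInfo and ScanItems are made-up names; a real
// export would carry extern "C" __declspec(dllexport) on Windows.
struct ItemInfo {
    int id;
    double value;
};

// C-style callback: return false to stop the enumeration early.
typedef bool (*ItemCallback)(const ItemInfo* info, void* context);

// Stand-in for an exported enumeration function: calls the callback once
// per item, passing the caller's context pointer back untouched, and
// returns how many items were visited.
int ScanItems(ItemCallback cb, void* context)
{
    const ItemInfo items[] = { { 1, 0.5 }, { 2, 1.5 } };
    int visited = 0;
    for (int i = 0; i < 2; ++i) {
        ++visited;
        if (!cb(&items[i], context))
            break;
    }
    return visited;
}
```

This is exactly the pattern that is awkward from most managed languages but trivial from C++.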

This is important, because Vista comes with a wealth of APIs. Some of them are real .NET APIs, and some of them are downright .NET hostile.

  • Trivial: direct .NET APIs. WCF, WPF, WF, … are .NET APIs that have no native equivalent, which doesn’t matter with C++/CLI.

  • Callable wrappers: native APIs with a .NET wrapper.

  • PInvoke signatures.

  • Raw Win32 APIs, like power management and the Vista wizards.

  • COM, like the search and organize APIs.

  • .NET hostile: the common file dialog and the network awareness API.

Anything above the line is more or less easy to use. Anything below the line is either hard, or an exercise in S&M.

Kate then had some nice demos that demonstrated that some things are equally hard / easy to do in C# and C++/CLI (like restart and recovery) and some things are only possible in C++/CLI (like the network awareness functionality).

Network awareness uses COM connection points and similar machinery. The reason is simple: performance. If you want the OS to be fast, you have to be prepared to give up nicety at some point. It is perfectly possible (and probable) that someone will wrap this up in an assembly for .NET languages to use, but inside it will still be implemented using C++/CLI.

And finally, there is another good reason why some stuff HAS to be raw COM.

For example, take explorer plugins. An explorer plugin can be provided to e.g. show metadata for a file.

Then suppose a managed application – running on .NET 2.0 – opens a browse dialog which uses Windows Explorer. That browse dialog will also load the explorer plug-in. It is a simple .NET law that a process can only load one version of the .NET framework.

I don’t know what exactly would happen if that plug-in was built on .NET 3.0. But it wouldn’t work. The only way explorer can be guaranteed to work is if all of its components are native code.

TLA329: writing maintainable and robust applications with VS 2008 and Team Edition for software devs

This session is hosted by Marc Popin-Paine and Conor Morrison.

I have the Team Suite in my subscription, but never really did much with it so I thought this was a good opportunity to get up to speed.

There are several exciting features with this edition of VS:

  • Code analysis: this is a static analysis of the code that can detect a lot of things that are not picked up by the compiler by default. The technology comes from PreFast, which the DDK guys have been using for years. It works for both managed and native code. If you can be bothered to download the DDK and do some manual work, you should be able to use PreFast on your code without Team Edition, though it will not be integrated in the IDE of course.

  • Code metrics: this is a way to measure the health of your code by a number of variables. It calculates / measures the following things:

    • Dependencies between types.

    • Depth of class inheritance.

    • Number of executable lines of code.

    • Cyclomatic complexity. This has to do with the number of different possible code paths in your functions (branching and nesting of if statements, etc.).

    • Maintainability index. This is a formula over the previously measured metrics that says something about the quality of the code. Analysis of the Windows code base and other code bases has shown that there is a direct correlation between the maintainability index and the number of bugs in a piece of code.

  • Profiling: code can very easily be instrumented or sampled at runtime to measure the time spent in each function to diagnose performance problems. This is truly a neat feature and I already know in which application I am going to try this at home.

  • Unit testing: this has been available for a long time, but it has been made easier, and it has also been pushed down into the Professional edition of Visual Studio.

  • Code coverage with unit testing. This is quite neat. After a unit test, VS knows how much of your code was executed during the test, and what’s even neater is that you can see visually in Visual Studio which code that is, with different background colors for executed and non-executed code. Statistically speaking, once you have > 70% coverage in your unit tests, you can start to rely on the quality of the code.
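To make the cyclomatic complexity metric concrete, here is a small made-up example: the number is roughly the decision points in a function plus one, i.e. the number of independent paths a test suite has to cover.

```cpp
// Cyclomatic complexity = decision points + 1.
// clampSum has two decisions (the loop condition and the if), so its
// complexity is 3: three independent paths a unit test should cover.
int clampSum(const int* values, int count, int limit)
{
    int sum = 0;
    for (int i = 0; i < count; ++i) {   // decision 1: loop condition
        if (sum + values[i] > limit)    // decision 2: overflow guard
            return limit;               // path: clamped at the limit
        sum += values[i];
    }
    return sum;                         // path: full sum (or empty array)
}
```

A function with deeply nested ifs and loops quickly racks up a much higher number, which is exactly what the metric is meant to flag.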

The bad news of course – to me as a C++ programmer – is that some of these goodies are only for .NET. Unit testing and code metrics are only available for managed code.

Code coverage is available, but only from the command line.

It is of course a lot easier to implement these for managed code (because of all the metadata and reflection features) but it is still a pity that I won’t be able to get code metrics for my template classes.

SEC403: UAC: how it works and how it affects your code

This session was hosted by Chris Corio.

I was really on the fence about this one. I also wanted to go to ‘TLA301: advanced version control with TFS’ by Brian Randall. I know that Brian is a great speaker, and version control has become of interest to me lately.

Still, I chose UAC because I wanted to know more about it, and I can always view the presentation on TFS online later.

UAC is meant to push you into developing apps that don’t need to run as admin. A lot of apps write to Program Files, for example, only because the developers couldn’t be bothered to do anything else.

Either you make your app Vista aware with a manifest (and change your code if necessary) or you do nothing and it will run virtualized. However, this will be possible for a limited time only because virtualization only kicks in if your app:

  • Is not a 64 bit app.

  • Does not have a manifest

  • Runs as administrator.

Since 64-bit will become more common, and users will less frequently run as administrator, leaving a Vista-unaware app as-is is not a long-term option.

So what happens when you log on as an admin?

The Local Security Authority verifies your credentials, and then creates a token with administrator privileges. The elevated privileges are then stripped from the token, and your logon session gets a filtered token instead.

If you start a program that requires no elevation, it will run the same as for a standard user. If that app needs admin privileges, it will see that there is a real admin token available, and prompt you to confirm that it is OK to do so.

If an application needs to do anything that Windows deems to be for admins only, it will fail unless the application was elevated when it was created. It came as a surprise to me, but a process can only be elevated at creation time. This means that if you want your app to start without the annoying dialog and still do something privileged as an optional extra, there is only one thing you can do.

You have to put that ‘something’ in a separate executable and launch it via ShellExecute. It should also be possible to implement that ‘something’ as an out of proc COM server and launch it. Chris even mentioned it. But I have it on good authority that that is more along the lines of ‘Slaughter a goat, wait for the right constellation to align, make sure that a bunch of highly complex stuff is in the registry and then it might work’. So I am not counting that as a viable option atm.

Btw, the only way to launch something and trigger elevation is to use ShellExecute. CreateProcess doesn’t do that, because it only uses the current token to launch an application. Trying to use it anyway will cause a simple ‘permission denied’ error.

ShellExecute uses CreateProcess internally, figures out that the problem is elevation, triggers the UAC dialog to come on, which plays musical chairs with the admin and filtered token, and then launches the new process with the real admin token.
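A hedged sketch of that ShellExecute route: the 'runas' verb is what requests elevation. Here 'elevated_worker.exe' is a made-up name for the privileged helper, and the non-Windows branch exists only so the sketch compiles anywhere.

```cpp
#ifdef _WIN32
#include <windows.h>
#include <shellapi.h>
#endif

// Launch a separate executable elevated via the shell. The "runas" verb
// triggers the UAC prompt; CreateProcess has no equivalent.
// "elevated_worker.exe" is a hypothetical name for the privileged helper.
bool LaunchElevated(const wchar_t* exePath)
{
#ifdef _WIN32
    SHELLEXECUTEINFOW sei = {};
    sei.cbSize = sizeof(sei);
    sei.lpVerb = L"runas";          // request elevation
    sei.lpFile = exePath;
    sei.nShow  = SW_SHOWNORMAL;
    // Returns FALSE (GetLastError() == ERROR_CANCELLED) if the user
    // dismisses the UAC dialog.
    return ShellExecuteExW(&sei) != FALSE;
#else
    (void)exePath;                  // stub so the sketch compiles off-Windows
    return false;
#endif
}
```

The main app stays unelevated and calls this only when the user actually asks for the privileged operation.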

A quick word on virtualization.

If your app runs without a manifest, it will run in a virtual filesystem and registry. This is implemented in a file system filter driver which does all the redirecting.

The redirection is based on ‘copy on write’, so an app will access the real file in program files until it tries to modify it. At that time a copy is made, and the app will forever see that copy in the user local store.

Another bit of trivia: if your application needs to modify a global file so that it affects all users of the app, put it under ‘All Users’, which is perfectly legit.

Finally, Vista also separates elevated processes from non-elevated processes. So from a non-elevated process you won’t be sending Windows messages to, or opening process handles of, an elevated process.

There are probably more things that were not mentioned, but I bought ‘Writing Secure Code for Windows Vista’ by Michael Howard and David LeBlanc in order to learn the finer points of Vista UAC.

I never thought I’d say it, but Vista UAC is starting to make sense, even though there is a significant amount of teething problems still to work out. SP1 should make life in UAC land bearable.

INF302: Building manageable applications end to end

This session is hosted by David Aiken.

I will start off with a confession. I know this will be quite shocking to some of my peers, but I consciously decided not to attend Ale Contenti’s talk ‘TLA404: MFC updates for VC 2008 and beyond’.

The reason is simple. I don’t like MFC.

Not that MFC isn’t a powerful technology – because it is – and not because it is slow – because it isn’t – but something about MFC makes me go ‘Ehhhw’.

Maybe it is the fact that an MFC app looks cobbled together with a lot of macros, or maybe it is the fact that the class hierarchy is very, very deep, or maybe it is because a lot of it feels like stuff was just glued on and then riveted in place to make it stick.

MFC is a good solution for the problem it has to solve. So are garbage bins and lawn mowers. I just don’t like them.

Now to the topic of the current session.

To make an app manageable there are several things to do.

  • Your app needs to expose health information and performance data so that IT Pros can better diagnose problems, and so that you don’t get called at 3 A.M. because your app died and nobody has a clue what’s going on.

  • Deployment should be seamless. In the words of David: ‘Whoever thought of XCOPY deployment should be shot and buried’. The problem is that xcopy sounds like a great idea because it is simple. In reality, a complex app needs stuff in the registry, in the GAC, needs to register event ID message DLLs, …
    Doing all of that stuff manually is very tedious, and in the case of an uninstall, a lot of crap will be left behind.

  • You should provide an administration and configuration tool for your app. XML sounds nice in theory, if you are the kind of person who can edit a 1000 line XML file and get all the bracketing right.

Microsoft is developing a new framework for health modeling, of which a CTP can be expected somewhere in January. There was a code demo, but that didn’t quite work. The impression I got was that it would be fairly easy to instrument your code with event logging and performance monitors. It would also enable you to compile a management pack for Operations Center.

There would also be support for WMI, in order to allow administrators to poke and prod at your application in the standard way they can use for all of their poking and prodding at system components.

Microsoft has finally discovered that it would be a neat idea to enable your app to use group policy to override local configuration values, instead of forcing admins to run scripts on all computers to change your local XML file. Support for that is coming as well.

And finally, create your administration tool as a PowerShell cmdlet (commandlet).

These are .NET components that can be accessed from the PowerShell command line. It is trivial to slap a UI on a cmdlet if you want, but they are supported by default in the new management console. This means your app can be managed by an IT pro in the MMC they know and love.

TLA401: Debugging and crash dump analysis with VC++ 2008

Finally… the most anticipated session of today. For me, at least.

Debugging in C++ is a really interesting topic. Steve has been very elusive this week, and failed to show up at any of the C++ sessions so far. However, short of divine intervention there is no way he won’t be here AND host this session at the same time 🙂

Steve is one of the best speakers around, and he manages to insert a fair amount of humour in his sessions. If he is hosting a session he also talks the same way he would in a one on one conversation, so the atmosphere is relaxed and laid back. It may appear effortless, but I know that a tremendous amount of preparation is required in order to pull this off.

The focus of Visual C++ is and will remain native code development. VC++ is the only Microsoft tool that compiles to native code, and they want to make it as easy to use and as good as possible.

The secondary focus is to provide a great interop experience for interaction between native and managed code.

The compiler has some things that can help you prevent bugs from occurring. Some of them are things you normally only do in debug builds, like including crtdbg.h to detect memory leaks, and compiling with /RTC enabled to detect buffer overruns.

Things you can do in both debug and release builds are using the /GS and /SAFESEH switches to mitigate the effects of buffer overruns before it is too late, and using the secure CRT and checked iterators.

These have become the norm within Microsoft, and they are used in all codebases.

It is also possible to analyse your code statically with the /ANALYZE switch (which is only available in the Team Edition of VS). You can further add SAL annotations to your code to add metadata to functions that helps the analyzer determine if those functions are used correctly.

And then something that is common sense: use smart wrappers to encapsulate resources (also known as RAII programming).
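A minimal sketch of such a wrapper, using a FILE* as the resource (production code would reach for an existing wrapper like a smart pointer with a custom deleter instead of rolling its own):

```cpp
#include <cstdio>
#include <stdexcept>

// Minimal RAII sketch: the constructor acquires the resource and the
// destructor releases it, so it is freed on every exit path from the
// scope, including early returns and exceptions.
class File {
public:
    File(const char* path, const char* mode)
        : handle_(std::fopen(path, mode))
    {
        if (!handle_)
            throw std::runtime_error("cannot open file");
    }
    ~File() { std::fclose(handle_); }

    std::FILE* get() const { return handle_; }

private:
    // Not copyable: two owners would close the handle twice
    // (declared but not defined, the pre-C++11 idiom).
    File(const File&);
    File& operator=(const File&);

    std::FILE* handle_;
};
```

The same shape works for any handle-like resource: mutexes, registry keys, GDI objects, and so on.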

In order to make your code debuggable, use the ENSURE macro when possible instead of ASSERT. ENSURE behaves just the same, except it throws an exception in release builds.
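MFC's real ENSURE lives in the framework headers; a hedged, framework-free sketch of the idea (MY_ENSURE and safeDivide are invented names):

```cpp
#include <stdexcept>

// Sketch of an ENSURE-style check: unlike assert, it is NOT compiled out
// in release builds, and instead of terminating it throws, so the failure
// is catchable and leaves a usable trail in a crash dump. (MFC's real
// ENSURE throws an MFC exception; MY_ENSURE is a stand-in.)
#define MY_ENSURE(cond)                                          \
    do {                                                         \
        if (!(cond))                                             \
            throw std::logic_error("ENSURE failed: " #cond);     \
    } while (0)

int safeDivide(int a, int b)
{
    MY_ENSURE(b != 0);   // checked in debug AND release builds
    return a / b;
}
```

With a plain assert, the release build would divide by zero; here the precondition violation is reported in both configurations.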

Another few tips for making debuggable builds:

  • Archive all symbol files for builds that are shipped. That way you have a much better chance of doing something useful with a crash dump. Technically speaking, you can also obtain those by rebuilding the correct version of your product, but in some cases that could be problematic (e.g. SQL, Windows, VC++ itself, …)

  • Use property sheets to configure project settings. These property sheets can be independent of a project itself. This way they can be reused among other projects.

  • Do automated testing on both debug and release builds. This way you find more errors because some problems only show up in release or debug builds.

The next part of the session was about the neat things you can do with breakpoints, like put conditions on breakpoints, or even use data breakpoints to break when a variable changes.

Tracepoints also make it possible to perform an action when execution hits such a point. The example that was shown was to jump past an erroneous decrement operation without having to stop or change the code. A side effect was that it seemed to make execution much slower, though.

The STL finally has useful visualizers that e.g. allow you to view vectors as arrays. I think these are supposed to work for lists and maps as well. In any case, you can finally view STL variables as Our Creator intended when he made the universe.

It is also possible to add your own visualizers if you edit autoexp.dat. This is fully documented in the file itself, though quite complex so you have to take your time to do it.
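For a flavor of what that looks like, here is a hypothetical entry for a made-up MyLib::Point type, added under the [Visualizer] section of autoexp.dat ($e is the pseudo-variable for the object being inspected):

```
; Hypothetical visualizer for an invented MyLib::Point type.
; Shows a Point in the watch window as "(x, y)" instead of raw members.
MyLib::Point{
    preview ( #( "(", $e.x, ", ", $e.y, ")" ) )
}
```

The syntax is documented in comments inside autoexp.dat itself, which is where I would start before trying anything fancier.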

Edit and continue does not yet work in 64 bit mode, or for managed or mixed mode code. This is on the todo list but requires the cooperation of quite a few teams so that will not be for the release of VC2008.

Something that can be useful in multithreaded apps is the freezing and thawing of threads in the threads window.

Steve spent so much time showing all the debugging goodies that the session was almost over when he came to crash dump analysis so that was kind of a whirlwind demo.

Basically, he showed how to open a crash dump with WinDbg and find the cause of a crash, though the demo was very light on details. His main point was that if you ship software, you should register with winqual so that Microsoft can gather the crash dumps and send them to you.

You can no longer provide your own last chance crash handlers because this was something that hackers were abusing. They could install a crash handler, crash your program and suddenly they had the complete running state of an application with all its data.

This was a very good talk –the best so far IMO.

After the session I met Rafael, who is an Italian security MVP, but his passion is C++ so that redeems him 😉 I had seen him in other C++ talks and we have even been emailing since the beginning of this week regarding an informal C++ dinner, but I had never met him, so I didn’t know who he was. Go figure.


Another day with 3 C++ talks. Kate told me that there are more C++ talks in Europe than at similar events in the US.

One thing I can say with certainty: I am going to put more effort into making my apps Vista aware.

I have tried to use Vista before, but I went back to using XP because the whole UAC crap felt so cumbersome. However, with SP1 it will be a lot better, and now that I start to understand WHY a lot of things are implemented like they are now, it is all starting to make sense.
