Sometimes it sucks not to be mainstream TV cattle

I watch TV very rarely. Usually I only follow it with half an eye while I am doing something on my laptop or reading a book. The only time I really sit down to watch TV is on Sunday night, for the weekly episodes of several series (Las Vegas, NCIS, Justice).

Now this happens to me fairly often: I find a TV series that I like and at the end of the first season I try to find out when the next season will start, only to discover that a) other people thought it stank so it was cancelled by the producer, or b) my local network thought it stank so they either cancelled it completely, or moved it to a 1 AM timeslot.

I just found out that the TV series ‘Justice’ was cancelled forever.

Another legal TV series ‘Shark’ was moved to air at 23:00 by the local network.

‘Boston Legal’ was discontinued by the local networks after the first season, transferred to another network, dropped, picked up again, and moved to a 23:20 timeslot. By now it has been a while since I last saw it.

They did the same to ‘Las Vegas’ but fortunately that seems to have recovered a favorable status with the network powers that be.

One thing I really liked about Justice is that a case would be tried before the court and either lost or won, but you did not know what had really happened, so you could base your opinion solely on the court case and witness statements. Only at the very end of the show would you see the event as it had ‘really’ happened.

Sometimes a guilty man was found innocent, sometimes the reverse, and usually the verdict was correct. The fact that it wasn’t always correct made it interesting imo. And the cast was good as well.

It seems that very few people appreciate a TV series that contains dry wit, mockery of human nature and the need to think about the story.

Be careful when you compile a project that is not yours.

Just a word of warning for all of those people who download demo projects from the internet: be sure to review the code and all of the project settings BEFORE you hit ‘build’. Preferably build that project in a sandbox as well.

The reason is simple. All sorts of things happen when you hit build, and you have no control anymore when the build has started. Visual Studio can do the following things during a build:

  • Pre build step

  • Pre link step

  • Post build step

  • Custom build step

And these are just off the top of my head. An attacker might post a Visual Studio project that does something fishy in one of those build steps, like executing a batch file that does something malicious.
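To make this concrete, here is a hypothetical fragment of what a booby-trapped build step could look like in the .vcproj format of that era. The file name evil.bat is made up; the point is that the command line is arbitrary and runs with your privileges:

```xml
<!-- Hypothetical .vcproj fragment: the post-build event is just an
     arbitrary command line, executed after a successful build -->
<Tool
    Name="VCPostBuildEventTool"
    Description="Copying redistributables..."
    CommandLine="call evil.bat"
/>
```

An innocent-looking Description is all it takes to make this scroll past unnoticed in the build output.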

And unless you do a code review, be very wary of building a project whose output is set to automatically register itself on a successful build. You could very well end up building the malware that will then infect your computer until the end of time, or at least until you format your drive. That is another project setting that can stab you in the back.

So you end up doing a full code review and verifying the build steps, and you are still open to exploits. By default, if you hit ‘debug’ or ‘run’, Visual Studio simply invokes whatever it was that you built. This is very easy to override with something different. So it is perfectly possible for an attacker to bury an exe in a subfolder and have that invoked during a debug session.

Even if you check for exe and dll files, a bogus debug session could launch a bat file that will rename a bogus mp3 file to an exe file and then invoke that exe file…

As you can see, there are lots of ways in which a build or debug command can royally mess up your system, even if the code itself is perfectly clean and innocuous. And if you are a local administrator on your own computer, the damage can be significant.

Bottom line: you should never trust project files from unknown sources. They are easy to overlook or ignore, but they can contain a whole lot of hurt.

Tech-Ed Barcelona 2007: Afterthoughts

I learned a lot of new things, and all in all it was worth it to be there. I have to say, though, that the added value was less than last year.

Last year was my first Tech-Ed, and I had not yet heard about WxF, LINQ or any of the new stuff in C# 3.0. So basically everything was new at that time.

This time I focused on Vista, performance, and native development.

I admit that I have changed my mind about Vista. Before Tech-Ed I was convinced that Vista sucked. I have tried it at home, but I had the usual problems of missing drivers, failing applications and a gazillion UAC prompts.

Note: I knew that the Vista kernel had a number of significant changes, all for the better, and I admire the new driver frameworks that ship in Vista. The suckiness I refer to is about the shell / user experience.

What also contributed to the problem was the fact that I chose to run the 64 bit version of Vista, causing me some compatibility problems with recent hardware purchases. Something as stupid as my Logitech webcam, which supposedly had Vista64 drivers, would not work no matter what I tried.

I still think Microsoft dropped the ball on making sure that vendors provided drivers for Vista64. Writing the drivers is not Microsoft's job, of course, but perhaps they should have spent more time on 64 bit evangelism, made 64 bit support mandatory for the Windows logo earlier than they did, or used some other incentive.

But security-wise, I see the point now. It is a simple fact (and I admit I am guilty as well) that a lot of software was written without following Microsoft best practices, like not writing to files in ‘Program Files’ and not writing to global registry settings.

With XP they said ‘you shouldn’t…’, but with Vista the message is ‘You shall not…’

They took advantage of the switch to 64 bit to throw a lot of crap out of the system, which is perfectly fine since you had to change or rebuild your code anyway for native 64 bit. A lot of stuff in the Win32 API was designed in a time when security was a lot easier and the TCP/IP stack was optional. A major overhaul was warranted.

I applaud Microsoft – even though I cursed at the breaking changes – for following through and also banning practices that were not tied to the technology itself. It breaks a lot of stuff, and it has definitely hurt the uptake of Vista but I think in time, Vista will be able to put a serious damper on the proliferation of both malware and badly written applications.

It is also becoming more obvious that there is a renewed focus on C++.

There were 8.5 sessions on C++ and native code development, more than double last year's count. The sessions were packed, with several hundred people attending most of them, and the knowledge level of the audience was pretty high.

As I said before: I think that the .NET hype has died and people are getting a more realistic message again. .NET and C#, F# and VB are playing an important role, and are the best solution for a lot of scenarios. But there is still an important role to play for C++ and native development.

And now something completely different.

I went to the MVP booth to pick up my keychain, and while I was standing there I started talking with a university student from an Eastern European country. She was here on a Microsoft invite, along with a group of other people from the same country.

She would not have been able to attend otherwise, since the entrance fee is shockingly high compared to the average income of the people in that country. Given that I found it very expensive as well, I can see her point.

She told me that it was hard to get a visa to leave the country because she was a girl. Apparently, the thinking in her country is that girls try to marry the first guy they meet in the airport so that they can stay abroad. When she goes back she has to go to the embassy in her country to prove that she has really returned from Barcelona.

In this day and age it is easy to think that people are free to do what they want and all borders are open, but situations like this still exist, even more than 15 years after the fall of the Berlin Wall and the Iron Curtain.

But she also impressed me with her hope and enthusiasm. She and her fellow students are working very hard to modernize their schools and country, and if there are more people like her, there is no doubt in my mind that it will happen. Those people have a dream, and the determination to work for it.

Tech-Ed Barcelona 2007: Day 5

Yesterday evening the C++ guys (and lady) got together for an informal dinner. We met in the Hilton lounge and then took a heavily graffiti’d metro into the city center. The place we went to was Cal Pinxo. It was a very good restaurant, and the only slight problem was that the staff spoke very poor English.

The food was good, and the company was great: Steve Teixeira, Ale Contenti and his wife, Kate Gregory, Jochen Kalmbach, Eric Mittelette, Eric Vernie, Gilles Guimard, Raffaele Rialdi, and a couple of French and Italian guys whose names I can’t remember.

The evening zoomed past. We even managed to speak about other topics than C++ for at least part of the time. Of course, one great thing about this group of people is that C++ could be the topic of casual conversation as well :-)

I skipped the first session this morning. There was nothing really interesting on the program, and I decided to pack my luggage and check out a bit later. This is similar to last year. I suspect that they put all the good stuff on the first couple of days, and kept all the less spectacular stuff for Friday because a lot of people are leaving throughout the day.

I arrived at 10 AM in the exposition hall and had a long chat with Steve Teixeira and a C# MVP whose name I can’t remember.

TLA405: Parallel and Asynchronous functions programming on .NET with F#

This session is hosted by Don Syme.

It was about the new F# language that ‘escaped’ from the Microsoft research labs. F# is a .NET language like C#, with strict typing, but it is a lot easier to use in a functional way. It uses type inference to ensure strict typing, and supports anonymous functions and classes.

The main keyword in F# is let.

For example:

let data = (1,2,3)

data is a tuple of three ints, initialized to 1, 2 and 3.

Everything is still statically typed, but everything is inferred from context, so it is very easy for non-programmers to pick up. In C# you have to know about delegates, asynchronous callbacks, and so on in order to do something useful. F# is targeted mostly towards domain experts and math usage at this point.

There were some more demos, and real world uses of F#. One of them was the way in which the ad-targeting software of live search works.

Ad targeting depends on a lot of variables, like IP address and search query history.

The training of the targeting software was done through data analysis in F#, where 6 TB of data was crunched on a single machine in two weeks' time, which is equivalent to one training record per ~150 µs.

There are plans to add F# to the list of languages in Visual Studio with the next release of VS after VS 2008.

F# looks interesting, and if you are a domain expert it is something that will allow you to create high performance algorithms without having to be an experienced software developer.

SEC303: New cryptography: algorithms, APIs and architecture

This session was delivered by Rafal Lukawiecki.

The topic was the new suite of crypto algorithms in Vista, and what you can do with them.

This session was basically a more thorough highlight of the same content that was also shown in his first talk on day one (of which I have already written a report).

Microsoft is serious about security, and wants you to have the correct tools to generate secure applications. Vista uses those same algorithms internally, and nobody within Microsoft is allowed to use other algorithms without approval from the security committee.

If your application handles sensitive data in any way, consider upgrading to those new algorithms because 3DES and MD5 have been proven to be vulnerable to attack.

I did not stay until the very end of this session because I wanted to avoid the rush at the luggage storage.


It is over. I am not going to write up my afterthoughts on the plane. Instead I will sit back, read a good book (Wintersmith, by Terry Pratchett), have something to eat and ‘enjoy’ the flight home.

The afterthoughts will have to wait until Monday. I promised the weekend to my wife and kids.

Tech-Ed Barcelona 2007: Day 4

Yesterday evening I went to the local shopping mall to buy a present for my daughters. I had promised it, and I have to make sure that I am not going to be ‘bad daddy’ for leaving them alone for a week :-)

I have to admit that the conference is starting to weigh on me. As fun and valuable as it is to be here, it is hard work.

TLA03-IS: Exploring the upcoming C++ standard C++0x and TR1

This session is hosted by Kate Gregory and Ale Contenti.

It is an open discussion where they will give a short overview of the upcoming release of TR1 and the more distant release of C++0x.

Originally, the name C++0x was used to indicate that it would be formalized sometime before 2010, but Steve Teixeira already indicated that the x could be hex as well, in which case the name would still be correct even if the standard only got approved in 2015.

Currently there are no plans to release TR1 for VC2005, even though it would work with current technology. VC10 will already include some C++0x features that are easy to implement and won’t change anymore, and VC11 will then probably include the rest.

Concepts are a welcome new addition to template programming. They have the same goal as constraints on .NET generic types: to ensure that template arguments implement certain methods and operators. The syntax uses ‘concept’ and ‘requires’. The big difference with constraints, however, is that everything is checked at compile time instead of at runtime. So template code execution is just as blazing fast as before, but now there will be easy-to-understand compile time errors instead of the 50 line type name errors that all of us have learned to love.

The C++ language itself will get the notion of concurrency, and threading primitives to declaratively ensure atomicity of memory access.

Template syntax will be extended with the ‘…’ syntax that allows for a variable number of template parameters. This is great for scenarios where the number can change depending on how you use it. I actually know a way to use this in my current code, but I am not getting into it here because a) that would take me too much time and b) only experienced C++ programmers (like, all 3 who read this article, one of whom is myself :-)) would understand and care.

Rvalue references are another addition. They are a bit like const references, except that they allow you to change the parameter, which will typically be a temporary value. Basically, with rvalue references your function indicates that the original value will not be used for anything useful after the function has executed. This allows you to implement move semantics, with compiler verification of what you are doing.

A great addition is that the ‘auto’ keyword finally gets a good use. It will do the same thing as the ‘var’ keyword in C#.

For example, if you have a variable like this
std::map<std::string, std::wstring> myVar;
and you want to have a local iterator, you can now declare it like this:
auto iter = myVar.begin();
instead of doing this:
std::map<std::string, std::wstring>::iterator iter = myVar.begin();

These are the little things that can save you a lot of time.

Decltype is a keyword that returns (at compile time) the type of an expression. This seems silly, but you can use it to templatize the return type of a function based on what it does.

For example, the result of short + float is float, and because of integer promotion, the result of short + short is actually int.

These things are notoriously hard to do with current template technology.

And then something I have wanted for a long time: lambda functions, aka anonymous functions. A lot of algorithms in the STL require you to define the beginning and the end of a data range, as well as pass in a function pointer for defining what the algorithm has to do precisely. A good example of this is the std::for_each() construct.

Currently you have to create a separate function for e.g. doing something like this:

void DoStuff(int i)
{
  std::cout << i << std::endl;
}

void foo(void)
{
  std::vector<int> v;
  std::for_each(v.begin(), v.end(), DoStuff);
}

Now, the syntax for lambdas is not yet finalized so it might change, but it would be something like this:

void foo(void)
{
  std::vector<int> v;
  std::for_each(v.begin(), v.end(), <>( int i, cout << i << endl));
}

It might seem unimpressive, but when you consider that you’d have dozens or more of these DoStuff functions in your project, it becomes a really nice feature to not have to care about declaring and implementing these one or two line functions.

Finally, if you want to know more about TR1 and C++0x, the place to be is the site where the C++0x committee reports and proposals are published.

TLA409: Empowering developers: x86 and x64 Performance considerations when using Visual Studio 2008

This session was hosted by Robin Maffeo.

It started off with the same boring stuff that was in the previous AMD session (hardware changes and pictures of dies).

And then it got more boring.

I had expected to attend a session about things you have to watch out for when coding for x64, and I had expected it to be an advanced session (given that it was level 400). You know, I expect to learn something despite being an experienced programmer.

Instead, there were some general explanations of the different .NET garbage collectors and threading stuff. Like: use locks to solve concurrency, and lock as little as possible… DUH. This is a level 400 session. If you didn’t know that in advance you had no business coming to this session. Even for a level 300 session this would have been basic stuff.

The fact that a lot of people left the session halfway through proves that I was not the only one who thought this.

Then there was a short list (no demo) of the same C++ compiler things that were mentioned in the previous session.

At least this session gave me enough time to write up my report of the 9 AM C++ talk.

And Mr. Maffeo: if you insist on making fun of the competition with the claim that your CPU is a ‘real’ quad core while ‘the other one’ is only two dual cores glued together, you should also mention that said ‘pretend quad core’ has been handing your ‘real quad core’ its virtual ass on a plate for a long time, performance wise.

Nuff said.

INF308: Top 10 mistakes developers make

This session is hosted by David Aiken.

Originally I was going to WIN312: Vista for managed developers: Besides .NET 3.x, but when I read the description, I discovered that the topics were the same ones that Kate had already covered in her talk about C++/CLI and Vista: a natural fit.

And in that talk I learned that the features covered in WIN312 are much more easily accessed from C++/CLI so I decided to skip it.

Use the right tool for the job. If I have to remove a splinter, I’d rather use a razor sharp scalpel than a spoon. Just because the spoon is safer does not mean that it is a better tool. Conversely, I would not eat soup with a scalpel. And – speaking as a moderator of the straightrazorplace forum – I can state with some confidence that my razors are sharp.

The idea behind this session is that there are 10 very easy things you can do to make the life of an ITPro more bearable.

Why do we care about ITPros?

Well, we don’t, but they are our biggest customer, and if they complain we have to stop our interesting work and fix their boring problem before we can get back to our real work.

  1. Install.

    1. I don’t care if it works on your machine, we’re not shipping your machine. So mistake 1: ad hoc configuration, or making installation changes manually. Make a script so that everything is repeatable.

    2. Mistake 2: Don’t make assumptions about security: file IO, firewall settings, creating perf counters. Run and test applications with standard user rights.

    3. Mistake 3: XCOPY install. XCOPY deployment really means XCOPY, and then add registry entries, set file permissions, create databases, create user accounts, open firewall settings, etc.

    4. Mistake 4: Uninstall = format C:. Always create and test an uninstaller.

    5. Mistake 5: It is not good if an upgrade means a reinstall, or worse. Create patch installers.

    6. Mistake 6: Your patch breaks everything else. Make sure you unit test your changes so that your patch has no side effects.

    7. Mistake 7: Your app comes without admin tools. Do provide admin tools. Instrument your app with WMI, because ITPros can already use that to configure the world and its dog across a network, and you get all that power for free. Also, if your app needs to be configured, provide a PowerShell cmdlet that does all these things. Cmdlets can be used from MMC, so you get that for free as well.

  2. Health.

    1. Mistake 8: Your app is dead and nobody knows why. Create a health model for your app that provides useful information to the outside world. The upcoming health model will make this a lot easier to do.

    2. Mistake 9: Provide a way for admins to test whether your app works OK, using synthetic transactions. This means having a way to tag an operation as a test but letting it be handled normally. The other end then knows that it was a test and can ignore it.

    3. Mistake 10: Test your app in multiple scenarios. Apparently the feedback system at a previous PDC sucked so bad it had to be rewritten overnight. If more than a couple people used it at the same time it literally crashed the network.

This was a very interesting session. Even though my software will never run in ITPro environments, the points remain valid.

TLA407: Dealing with Concurrency and Multi-Core CPUs with today’s development technologies

This session is hosted by Joe Duffy.

The talk is about the different performance and responsiveness issues with parallel programming.

The first half of the talk was about showing what it means to use threads or a thread pool or a background worker.

Despite the fact that it was a good talk, I found it all very basic, but then I have been writing multithreaded apps for a long time. This was new stuff for a lot of people, and there are still lots of misconceptions about threading and CPUs.

For example, one person asked the following question: ‘If I have a 64 bit quad core system, can it run 8 32 bit threads at the same time?’

No question is stupid, and it is not an unreasonable thing to ask if you are a novice, but it shows that what is obvious to an experienced parallel programmer is not obvious to a lot of people.

There was a good demo in C# showing parallel tasks calculating a Fibonacci sequence.

The second half of the talk was about locking, and what you should do if you use locking. Monitor and lock were explained, as was the new ReaderWriterLockSlim. Kernel objects were mentioned as being less efficient, but useful for interop.

Other mechanisms mentioned were Monitor.Wait, Monitor.Pulse, Monitor.PulseAll, and EventWaitHandle.

There was also mention of memory reordering (which the CLR memory model restricts), and of the fact that native pointer-sized reads and writes are always atomic if they are properly aligned.

I probably should have attended the session on new features in PowerShell 2.0, but that was a level 200 session, and I should be able to flip through the PowerPoint slides and get the gist of it.

INF307: Windows Server 2008 for developers: transactional NTFS

This session is hosted by Jason Olson.

The other sessions do not look that appealing to me, and I know nothing about transactional NTFS, so there is a good chance I’ll learn something new.

Jason is a great guy and he is currently doing evangelism for Windows Server 2008. He also has a development background and once Windows Server 2008 is kicked out of the door into the cold hard world of IT, he will be doing stuff in the parallel programming experience for .NET.

The meat of the session is: what is transactional NTFS (or TxF as it is called), why should I use it, and how do I do that?

If an NTFS transaction is started, you can create files via that transaction, and they will remain invisible to the entire system until they are committed. A rollback instead would remove those files and the system would never know that they even existed.
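In raw Win32 terms, the flow looks roughly like this. This is a sketch of my own, not the C++/CLI code from the demo; it needs Vista or Windows Server 2008 and a link against KtmW32.lib, and all error handling is omitted:

```cpp
#include <windows.h>
#include <ktmw32.h>

int main() {
    // Start a kernel transaction
    HANDLE tx = CreateTransaction(NULL, NULL, 0, 0, 0, 0, NULL);

    // Create a file inside the transaction: invisible to everyone else
    HANDLE file = CreateFileTransactedW(
        L"demo.txt", GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
        FILE_ATTRIBUTE_NORMAL, NULL, tx, NULL, NULL);

    DWORD written = 0;
    WriteFile(file, "hello", 5, &written, NULL);
    CloseHandle(file);

    // Roll back: demo.txt vanishes as if it never existed.
    // CommitTransaction(tx) would instead make it visible to the system.
    RollbackTransaction(tx);
    CloseHandle(tx);
    return 0;
}
```

Until the commit, no other process ever sees demo.txt, which is exactly the invisibility described above.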

This session was all code and demo and no PowerPoint. It was very interesting, but I could not take many notes and pay attention at the same time.

What was very interesting is that most of the code was in C++/CLI, and for a very good reason. All of the transaction stuff is in COM, and only after you set up the COM stuff can you do transacted file IO.

These things are pretty ugly if you have to do them in C# or VB.NET, so C++/CLI is a natural fit in this situation, where you can do everything like He intended us to.

Not all hope is lost for the managed programmers though. There are community projects underway to provide managed wrappers for the new Vista functionality. One of these projects is ‘Vista Bridge’, which bridges the new Vista APIs to the managed world.

Btw, Vista and Windows Server 2008 share the same code base, so these wrappers will work on both.

There are still a couple of teething problems with transactional NTFS on Vista, but these should be solved in Vista SP1 or Windows Server 2008.

Also interesting to note is that more than half of the people in the audience were C++ programmers. (I’d say 60% or so).


You can hardly call me unbiased, but I think C++ has a great future ahead of it. A lot of work is going on to make C++ easier to learn, and easier to use productively (like adding regex, shared_ptr and notions of concurrency).

It will take several more years before we are there, but the most important thing is that all of this is happening.

I also managed to pick up a couple more free t-shirts, one of which simply says ‘geek’ in white on black. To get it I had to prove at the learning booth that I am an MCP (Microsoft Certified Professional).

However, I have long lost my MCP card, and the web app that would check my MCP status based on my hotmail address crashed halfway through.

The lovely looking hostess then called a nice guy who decided to use the ‘challenge based approach’ to verify my status. I had to list my certs; he chose one and then asked me: ‘OK, so what was on the exam?’

‘Uhhh…’ Since that was several years ago I had to think for a second (it was Windows 2000 Professional), but then I was able to fire off enough exam content to convince him.

Tech-Ed Barcelona 2007: Day 3

My night has been less than ideal. The hotel maintenance crew did some work on the airco system in the hallway yesterday, and they did not adjust the flow valves correctly. So every time the airco kicked in, there was the awful sound of refrigerant cavitation.

I first called the reception at 20:30, asking if they could send someone to fix it. They were going to do that.

Of course, when the airco in the hallway turned itself off, the noise was gone and I forgot about it until 22:30.

I called again and told them I wanted to go to sleep, so they should do something quickly. They were going to do that again.

At 23:00 I had still seen nobody, and I was getting pissed off because I was really tired and I couldn’t go to sleep.

I called the reception again, and the clerk told me that they had called the maintenance guys, but noone had responded yet.

He then had the brilliant suggestion to leave a note at the reception for the maintenance crew the next day. Again I told him that that wasn’t going to do me any good, and that he should do something NOW (I wasn’t shouting. Just.). He must have gotten my point, because he promised to have someone check it out in 2 minutes.

It took 6 (I was already tying my shoelaces to go to the reception in person), but then a friendly non-English-speaking maintenance guy appeared. He looked at the airco grating, said ‘ah, si’ and left my room.

He returned with a ladder, and opened up the ceiling of the bathroom. Apparently the piping for the main airco runs through my ceiling. He closed a couple of valves, and as if by magic: ‘ . . . ‘ blessed silence.

By then it was already 23:30 and so my night was a bit short. But with enough coffee and bacon this morning, I was good to go.

TLA313: Microsoft Visual C++ and Windows Vista: a natural fit

This was another great session hosted by Kate Gregory.

When it comes to talking with the OS, C++ is the most versatile language because it has native support for all of the following:

  • C style functions exported from a DLL, using structs and callback functions

  • Consume COM, with or without a typelib or primary interop assembly.

  • Implement a COM interface (so that you can be consumed by the system).

  • Register for callback notifications on system events.

  • Call managed methods or use delegates.

This is important, because Vista comes with a wealth of APIs. Some of them are real .NET APIs, and some of them are downright .NET hostile.

The APIs fall on a spectrum, from trivial to downright hostile:

  • Trivial: .NET direct. WCF, WPF, WF, … all of these are .NET APIs that have no native equivalent. Which doesn’t matter with C++/CLI.

  • Callable wrappers. These are native APIs with a .NET wrapper.

  • PInvoke signatures.

  • Raw Win32 APIs, like those for power management and Vista wizards.

  • COM, like the search and organize APIs.

  • .NET hostile. Common file dialog, network awareness API.

Anything above the line is more or less easy to use. Anything below the line is either hard, or an exercise in S&M.

Kate then had some nice demos that demonstrated that some things are equally hard or easy to do in C# and C++/CLI (like restart and recovery), and some things are only possible in C++/CLI (like the network awareness functionality).

The reason for this is that network awareness uses COM connection points and other low-level machinery. The reason is simple: performance. If you want the OS to be fast, you have to be prepared to give up niceties at some point. It is perfectly possible (and probable) that someone will wrap this up in an assembly for .NET languages to use, but inside it will still be implemented using C++/CLI.

And finally, there is another good reason why some stuff HAS to be raw COM.

For example, take explorer plugins. An explorer plugin can be provided to e.g. show metadata for a file.

Then suppose a managed application (running on .NET 2.0) opens a browse dialog which uses Windows Explorer. That browse dialog will also load the explorer plug-in. It is a simple .NET law that a process can only load one version of the .NET framework.

I don’t know what exactly would happen if that plug-in was built on .NET 3.0. But it wouldn’t work. The only way explorer can be guaranteed to work is if all of its components are native code.

TLA329: Writing maintainable and robust applications with VS 2008 and Team Edition for software devs

This session is hosted by Marc Popin-Paine and Conor Morrison.

I have the Team Suite in my subscription, but never really did much with it so I thought this was a good opportunity to get up to speed.

There are several exciting features with this edition of VS:

  • Code analysis: this is a static analysis of the code that can detect a lot of things that are not picked up by the compiler by default. The technology comes from PreFast, which the DDK guys have been using for years. It works for both managed and native code. If you can be bothered to download the DDK and do some manual work, you should be able to use PreFast on your code without Team Edition, though it will not be integrated in the IDE of course.

  • Code metrics: this is a way to measure the health of your code by a number of variables. It calculates / measures the following things:

    • Dependencies between types.

    • Depth of class inheritance.

    • Number of executable lines of code.

    • Cyclomatic complexity. This is the number of different possible code paths through your functions (branching, nesting of if statements, etc.).

  • Maintainability index. This is a formula over the previously measured metrics that says something about the quality of the code. Analysis of the Windows code base and other code bases has shown that there is a direct correlation between the maintainability index and the number of bugs in a piece of code.

  • Profiling: code can very easily be instrumented or sampled at runtime to measure the time spent in each function to diagnose performance problems. This is truly a neat feature and I already know in which application I am going to try this at home.

  • Unit testing: this has been available for a long time, but it has been made easier, and it has also been pushed into the Professional editions of Visual Studio.

  • Code coverage with unit testing. This is quite neat. After a unit test, VS knows how much of your code has been executed during the test, and what’s even neater is that you can see visually in Visual studio which code that is, with different background colors for executed and non-executed code. Statistically speaking, as soon as you have > 70% coverage in your unit test, you can start to rely on the quality of the code.
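To make the cyclomatic complexity metric above concrete, here is a small sketch (the function and the numbers are mine, not from the session). Complexity is the number of decision points plus one, i.e. the number of independent paths a test suite would have to cover:

```cpp
// Cyclomatic complexity = number of decision points + 1.
// This function has three decision points (the loop condition and
// two ifs), so its complexity is 4: four independent paths through
// the code that a unit test suite would need to exercise.
int clamp_sum(const int* v, int n, int lo, int hi) {
    int sum = 0;
    for (int i = 0; i < n; ++i) {        // decision point 1
        if (v[i] < lo)       sum += lo;  // decision point 2
        else if (v[i] > hi)  sum += hi;  // decision point 3
        else                 sum += v[i];
    }
    return sum;
}
```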

The bad news of course – to me as a C++ programmer – is that some of these goodies are only for .NET. Unit testing and code metrics are only available for managed code.

Code coverage is available, but only from the command line.

It is of course a lot easier to implement these for managed code (because of all the metadata and reflection features), but it is still a pity that I won’t be able to get code metrics for my template classes.

SEC403: UAC: how it works and how it affects your code

This session was hosted by Chris Corio.

I was really on the fence about this one. I also wanted to go to ‘TLA301: advanced version control with TFS’ by Brian Randall. I know that Brian is a great speaker, and version control has become of interest to me lately.

Still, I chose UAC because I wanted to know more about it, and I can always view the presentation on TFS online later.

UAC is meant to push you towards developing apps that don’t need to run as admin. A lot of apps write to Program Files, for example, simply because the developers couldn’t be bothered to do anything else.

Either you make your app Vista aware with a manifest (and change your code if necessary) or you do nothing and it will run virtualized. However, this will be possible for a limited time only because virtualization only kicks in if your app:

  • Is not a 64 bit app.

  • Does not have a manifest

  • Runs as administrator.

Since 64 bit will become more common, and users will less frequently run as administrator, leaving a Vista-unaware app as-is is not a long term option.

So what happens when you log on as an admin?

The Local Security Authority service verifies your credentials, and then creates a token with administrator privileges. The elevated privileges are then stripped from that token, and your logon session gets a filtered token instead.

If you start a program that requires no elevation, it will run the same as for a standard user. If that app needs admin privileges, it will see that there is a real admin token available, and prompt you to confirm that it is OK to do so.

If an application needs to do anything that windows deems to be for admins only, it will fail to do so unless the application was elevated when it was created. It came as a surprise to me, but a process can only be elevated when it is created. This means that if you want your app to start without the annoying dialog and still have it do something privileged as an optional thing, there is only one thing you can do.

You have to put that ‘something’ in a separate executable and launch it via ShellExecute. It should also be possible to implement that ‘something’ as an out of proc COM server and launch it. Chris even mentioned it. But I have it on good authority that that is more along the lines of ‘Slaughter a goat, wait for the right constellation to align, make sure that a bunch of highly complex stuff is in the registry and then it might work’. So I am not counting that as a viable option atm.

Btw, the only way to launch something and trigger elevation is to use ShellExecute. CreateProcess doesn’t do that, because it only uses the current token to launch an application. Trying to use it will simply cause a ‘permission denied’ error.

ShellExecute uses CreateProcess internally, figures out that the problem is elevation, triggers the UAC dialog to come on, which plays musical chairs with the admin and filtered token, and then launches the new process with the real admin token.

A quick word on virtualization.

If your app runs without a manifest, it will run in a virtual filesystem and registry. This is implemented in a file system filter driver which does all the redirecting.

The redirection is based on ‘copy on write’, so an app will access the real file in program files until it tries to modify it. At that time a copy is made, and the app will forever see that copy in the user local store.

Another bit of trivia: if your application needs to modify a global file so that it affects all users of the app, put it under ‘All Users’, which is perfectly legit.

Finally, Vista also separates elevated processes from non-elevated processes. So you won’t be sending window messages to, or opening process handles of, an elevated process from a non-elevated one.

There are probably more things that were not mentioned, but I bought ‘Writing Secure Code for Windows Vista’ by Michael Howard and David LeBlanc in order to learn the finer points of Vista UAC.

I never thought I’d say it, but Vista UAC is starting to make sense, even though there is a significant amount of teething problems still to work out. SP1 should make life in UAC land bearable.

INF302: Building manageable applications end to end

This session is hosted by David Aiken.

I will start off with a confession. I know this will be quite shocking to some of my peers, but I consciously decided not to attend Ale Contenti’s talk ‘TLA404: MFC updates for VC 2008 and beyond’.

The reason is simple. I don’t like MFC.

Not that MFC is not a powerful technology –because it is – and not because it is slow – because it isn’t, but something about MFC makes me go ‘Ehhhw’.

Maybe it is the fact that an MFC app looks cobbled together with a lot of macros, or maybe it is the fact that the class hierarchy is very, very deep, or maybe it is because a lot of it feels like stuff was just glued on and then riveted in place to make it stick.

MFC is a good solution for the problem it has to solve. So are garbage bins and lawn mowers. I just don’t like them.

Now to the topic of the current session.

To make an app manageable there are several things to do.

  • Your app needs to expose health information and performance data, so that IT pros can better diagnose problems, and so that you don’t get called at 3 A.M. because your app died and nobody has a clue what’s going on.

  • Deployment should be seamless. In the words of David: ‘Whoever thought of XCOPY deployment should be shot and buried’. The problem is that xcopy sounds like a great idea because it is simple. In reality, a complex app needs stuff in the registry, in the GAC, needs to register event ID message DLLs, …
    Doing all of that stuff manually is very tedious, and in the case of an uninstall, a lot of crap will be left behind.

  • You should provide an administration and configuration tool for your app. XML sounds nice in theory, if you are the kind of person who can edit a 1000 line XML file and get all the bracketing right.

Microsoft is developing a new framework for health modeling, of which a CTP can be expected sometime in January. There was a code demo, but that didn’t quite work. The impression I got was that it would be fairly easy to instrument your code with event logging and performance monitors. It would also enable you to compile a management pack for Operations Manager.

There would also be support for WMI, in order for you to allow administrators to poke and prod at your application in the standard way they use for all of their poking and prodding at system components.

Microsoft has finally discovered that it would be a neat idea to let your app use group policy to override local configuration values, instead of forcing admins to run scripts on all computers to change your local XML file. Support for that is coming as well.

And finally, create your administration tool as a PowerShell cmdlet (commandlet).

These are .NET components that can be accessed from the PowerShell command line. It is trivial to slap a UI on a cmdlet if you want, and they are supported by default in the new management console. This means your app can be managed by an IT pro in the MMC they know and love.

TLA401: Debugging and crash dump analysis with VC++ 2008

Finally… the most anticipated session of today. For me, at least.

Debugging in C++ is a really interesting topic. Steve has been very elusive this week, having failed to show up at any of the C++ sessions so far. However, short of divine intervention, there is no way he won’t be here this time, since he is hosting the session himself :)

Steve is one of the best speakers around, and he manages to insert a fair amount of humour in his sessions. If he is hosting a session he also talks the same way he would in a one on one conversation, so the atmosphere is relaxed and laid back. It may appear effortless, but I know that a tremendous amount of preparation is required in order to pull this off.

The focus of Visual C++ is and will remain native code development. VC++ is the only Microsoft tool that compiles to native code, and they want to make it as easy to use and as good as possible.

The secondary focus is to provide a great interop experience for interaction between native and managed code.

The compiler has some things that can help you prevent bugs from occurring. Some of them are things you normally only do in debug builds, like including crtdbg.h to detect memory leaks, and compiling with /RTC enabled to detect buffer overruns.

Things you can do in both debug and release builds are using the /GS and /SAFESEH switches to mitigate the effects of buffer overruns before it is too late, and using the secure CRT and checked iterators.

These have become the norm within Microsoft, and they are used in all codebases.

It is also possible to analyze your code statically with the /analyze switch (which is only available in the Team Edition of VS). You can further add SAL annotations to your code to add metadata to functions that helps the analyzer determine whether those functions are used correctly.

And then something that is common sense: use smart wrappers to encapsulate resources (which is also known as RAII programming).
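As a minimal sketch of the RAII idea (the class below is hypothetical, not from the session): the constructor acquires the resource, and the destructor releases it on every exit path, including early returns and exceptions.

```cpp
#include <cstdio>
#include <stdexcept>

// Hypothetical RAII wrapper for a FILE*; real code would wrap whatever
// resource the application uses (handles, locks, allocations, ...).
class File {
public:
    File(const char* path, const char* mode)
        : f_(std::fopen(path, mode)) {
        if (!f_) throw std::runtime_error("cannot open file");
    }
    ~File() { if (f_) std::fclose(f_); }  // released on every exit path
    File(const File&) = delete;           // no accidental double-close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};
```

Because the destructor runs automatically when the wrapper goes out of scope, there is no code path on which the resource can leak.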

In order to make your code debuggable, use the ENSURE macro where possible instead of ASSERT. ENSURE behaves just the same, except that it also throws an exception in release builds.
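A rough approximation of the idea (MY_ENSURE is my own name; MFC’s actual ENSURE is implemented differently): an ASSERT compiles away in release builds, so a bad argument sails straight through, while an ENSURE-style check still validates in release and throws instead.

```cpp
#include <cassert>
#include <stdexcept>

// Sketch only: in debug builds behave like assert; in release builds
// (NDEBUG defined) keep the check and throw instead of vanishing.
#ifdef NDEBUG
  #define MY_ENSURE(cond) \
      do { if (!(cond)) throw std::runtime_error("ENSURE failed: " #cond); } while (0)
#else
  #define MY_ENSURE(cond) assert(cond)
#endif

int length(const char* s) {
    MY_ENSURE(s != nullptr);   // still checked in release builds
    int n = 0;
    while (s[n]) ++n;
    return n;
}
```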

Another few tips for making debuggable builds:

  • Archive all symbol files for builds that are shipped. That way you have a much better chance of doing something useful with a crash dump. Technically speaking, you can also obtain those by rebuilding the correct version of your product, but in some cases that could be problematic (e.g. SQL, Windows, VC++ itself, …)

  • Use property sheets to configure project settings. These property sheets can be independent of a project itself. This way they can be reused among other projects.

  • Do automated testing on both debug and release builds. This way you find more errors because some problems only show up in release or debug builds.

The next part of the session was about the neat things you can do with breakpoints, like put conditions on breakpoints, or even use data breakpoints to break when a variable changes.

Trace points also make it possible to do something when execution hits such a point. The example that was shown was to jump past an erroneous decrement operation without having to stop or change the code. A side effect was that it seemed to make execution much slower though.

The STL finally has useful visualizers that e.g. allow you to view vectors as arrays. I think these are supposed to work for lists and maps as well. In any case, you can finally view STL variables as Our Creator intended when he made the universe.

It is also possible to add your own visualizers if you edit autoexp.dat. This is fully documented in the file itself, though quite complex so you have to take your time to do it.

Edit and continue does not yet work in 64 bit mode, or for managed or mixed mode code. This is on the todo list but requires the cooperation of quite a few teams so that will not be for the release of VC2008.

Something that can be useful in multithreaded apps is the freezing and thawing of threads in the threads window.

Steve spent so much time showing all the debugging goodies that the session was almost over when he came to crash dump analysis so that was kind of a whirlwind demo.

Basically, he showed how to open a crash dump with WinDbg and find the cause of a crash, though the demo was very light on details. His main point was that if you ship software, you should register with WinQual so that Microsoft can gather the crash dumps and send them to you.

You can no longer provide your own last chance crash handlers because this was something that hackers were abusing. They could install a crash handler, crash your program and suddenly they had the complete running state of an application with all its data.

This was a very good talk –the best so far IMO.

After the session I met Rafael, an Italian security MVP whose real passion is C++, so that redeems him 😉 I had seen him at other C++ talks, and we have even been emailing since the beginning of this week about an informal C++ dinner, but I had never met him in person, so I didn’t know who he was. Go figure.


Another day with 3 C++ talks. Kate told me that there are more C++ talks in Europe than at similar events in the US.

One thing I can say with certainty: I am going to put more effort into making my apps Vista aware.

I have tried to use Vista before, but I went back to using XP because the whole UAC crap felt so cumbersome. However, with SP1 it will be a lot better, and now that I start to understand WHY a lot of things are implemented like they are now, it is all starting to make sense.

Tech-Ed Barcelona 2007: day 2

I just found out that the session booklet only contains 2 pages for taking notes on 5 sessions. It is beyond me how they could have thought this would suffice.

In the next break I will have a look around in the exposition hall to see if I can get a notepad or some other source of paper.

DAT202: Overview of SQL Server 2008

This session is hosted by Francois Ajenstat. This is only a level 200 session, but I think it will be useful to have an idea of the new features in SQL 2008. After all, it will be released in a couple of months’ time.

The session itself started a couple of minutes late because there was a technical problem with one of the controllers. I hope this isn’t a trend. Last session yesterday started with a power outage.

The first thing that is immediately obvious is that Francois is a gifted speaker. He really connects with the audience and has the kind of presence that makes it appear as if it is the most natural thing in the world to talk in front of a large audience.


  • SQL 2008 now has support for transparent encryption of table data without needing developer support, and can use external key management.

  • Every action in a database is auditable. This is very nice in regulated environments where auditing is of critical importance. You now get it integrated in the database, for free.

  • Database mirroring is made a lot simpler, and has ways to resolve errors and corrupt database pages, making everything more reliable.


  • Both data and backup archives can be compressed on the fly, resulting in significant performance increases due to decreased IO.

  • SQL 2008 comes with an integrated resource governor, allowing you to allocate resources to users, jobs, or anything with a GUID. There was a nice demo of a concurrent execution of a payroll job (critical) and a simple reporting job (less important) that were fighting over resources.
    The resource governor made it very easy to assign the payroll job to a fast resource pool that could use up to 80% CPU time, and the reporting job to a slow pool that could use 20% CPU time.
    The performance monitor immediately showed the behavior of the jobs in reaction to this.

  • Performance data collection and analysis has been simplified.

Policy based management:

  • A lot of the behavior in SQL 2008 can now be managed through policies. For example, who can do what, what table names should look like, whether free-form queries are allowed, … all those things can be configured through a policy, just like normal group policies.


  • SQL 2008 now has intellisense to help you write queries. This can be a real timesaver.

  • There is support for something called the entity framework. This is basically a way to map logical data from tables and stored procedures to conceptual entities, even though the data can come from different tables or data sources. This is nothing revolutionary, but it can make life easier for developers.

  • It has become very easy to expose data on the internet in various ways.

  • There are several new data types, making it easier to store unstructured data like documents and MP3s, and spatial information like GPS coordinates.

  • There is a lot of new support for Business Intelligence, such as powerful integrated reporting tools with graphs, controls, gauges, …

All in all this was a very interesting session. I am not a database expert so perhaps I missed or misunderstood some stuff, but it became clear that developing a database is going to encompass more than setting up a table structure and providing a set of stored procedures to access them.

WIN202: Introduction of the Microsoft Sync framework.

This session is hosted by Phillip Vaughn.

The sync framework is a new addition to the .NET framework. I don’t know anything about it, so I attend this introductory talk primarily to know if this is something I should care about or not.

The key idea is to provide you as a programmer with a simple means to enable your applications to use data while disconnected from the data store, and then automatically synchronize when the connection returns. Conflicts should be detected and resolved, and users should be able to concurrently collaborate on the same data.

It also increases performance because your application will work on local data which gets synchronized in the background.

The sync framework is robust:


  • It supports conflict detection and resolution.

  • It handles connection and storage errors.

  • It handles all the corner cases that are notoriously hard to solve, like: A works independently on a dataset, copies it to B, B changes it again, A changes its own copy again, both upload at the same time, and halfway through the conflict resolution the connection drops…

Flexible, because it can work with:

  • Arbitrary data stores.

  • Arbitrary protocols.

  • Arbitrary network topology (peer to peer, master slave)

And finally, it lets you be productive because:

  • Creating offline-capable apps with VS2008 is dead easy.

  • It has built in support for lots of endpoints and protocols.

  • The runtime is expandable.

There was a demo with a customer database app that synchronized data on a PDA, outlook, and Vista Contacts.

Then there was also a demo of a sample app called ‘SyncToy’ which can be used to synchronize files and folders. It is open code and it works really nicely. So nicely, in fact, that I will probably use it at home to spread the data from my file server across different disks in a ‘set it and forget it’ way, to safeguard the data against disk crashes.

The key to synchronization resolution is to use metadata to solve all sorts of common problems.

The sync framework is really impressive for a first release, and I think it is really worth looking into if you develop applications with offline capabilities.

TLA323: What’s new in C++ 2008

This session is hosted by Kate Gregory.

I was hoping to see Steve Teixeira here as well, but he was missing in action.

This session is also hosted in one of the bigger rooms, and it was fairly crowded. If I had to guess, I’d say that there are about 200 or 250 people here. Last year all the C++ sessions were shoved in the smaller side rooms, but they were overflowing. Luckily, the event organizers have responded to that.

This talk primarily discussed the changes to the way in which you use VC++, and the way you should make your apps work with Vista.

The first part of the talk handled UAC (User Annoying Component) and what you can do to make it less annoying. Basically, you can do 2 things:

  • Instruct the linker to insert a manifest, declaring that you run elevated, so your app triggers the confirmation dialog at startup.

  • Instruct the linker to insert a manifest, declaring that you run without elevated privileges. But if your app does something it would need elevation for, it will fail.

  • The third option is not to use a manifest, but you shouldn’t do that because your app will run in a virtualized file system with virtualized registry.

Another important issue: Visual Studio itself doesn’t need to run elevated anymore.

VC++ 2008 also comes with a class designer, which is really a class viewer that shows a class diagram of the code. It is not a true designer because, in a survey, all the corporate programmers asked indicated that they would still make changes in their code, not in a class designer.

The resource editor can now also work with the high res Vista icons, though you cannot edit them. The justification for this is that programmers generally don’t design high res images. That is done by graphics people, and they have their own tools for that.

There is a new compiler switch /MPn that allows you to compile files on n processors at the same time. In a project with dozens, hundreds or thousands of source files, this can make a big difference.

If your project depends on an another assembly, it used to be the case that your entire project would be recompiled when that assembly changed, because the only way VC detected change was based on timestamp. In a large project, this would trigger a full rebuild almost every time. Now VC looks only at the signatures of the public classes (the meta data). As long as that stays the same, the assembly will not be marked as changed.

And finally, VC2008 supports multi-targeting, so you can specify that your app runs on .NET2.0, 3.0 or 3.5 without needing to swap development environments like you need to do today.

This session zoomed past. I had high expectations because I saw Kate speak before, and I was not disappointed. This was a really great session.

TLA408: Multicore is here! But how do you resolve data bottlenecks in native code?

This session is hosted by Michael Wall.

Despite the fact that this is a level 400 session, it is crowded. Another 200 or 250 people would be my rough guess.

The session started off pretty dry, with a lot of slides about the new AMD processor. Every slide was followed by ‘…but I am not going to talk about that’.

As soon as that was over it got a lot better, and the session continued with a simple example illustrating the difference between array based operations and linked list based operations.

The idea is that with array based operations, the memory accesses can be calculated in advance and anticipated by the processor, so a lot can be prefetched. With list based access this is not true anymore.

To solve this, you can use an array of pointers to the list items, which can be prefetched.
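A sketch of that trick, assuming the goal is repeated passes over the list (the function and names are mine, not from the talk): chasing node->next pointers defeats the hardware prefetcher, but walking a contiguous array of element pointers gives it a predictable access pattern.

```cpp
#include <list>
#include <vector>

// Walking a std::list chases pointers the CPU cannot predict.
// Collecting the element addresses into a contiguous vector first
// gives the prefetcher a sequential pattern to stream through on
// every subsequent pass over the data.
long sum_via_pointer_array(const std::list<int>& items) {
    std::vector<const int*> ptrs;            // contiguous, prefetchable
    ptrs.reserve(items.size());
    for (const int& x : items) ptrs.push_back(&x);

    long sum = 0;
    for (const int* p : ptrs) sum += *p;     // sequential array walk
    return sum;
}
```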

The processor has something called a Translation Lookaside Buffer (TLB) which stores memory page addresses. That list is limited, so your code should use its data as locally as possible to keep the TLB from having to look up different memory pages.

A cache line is 64 bytes long. If you need to access one byte, you will get access to the next 63 bytes almost for free. So if you can make those 63 bytes useful, that is another performance win. Split often used data (hot) and rarely used data (cold) so that caching is efficient, and use small data types where possible.
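A hypothetical illustration of hot/cold splitting (the particle example is mine, not from the talk): fields touched in the inner loop live together so one 64-byte cache line holds several records, while rarely used data is moved to a separate structure.

```cpp
#include <cstdint>
#include <vector>

// "Hot" fields accessed every frame live together, so one 64-byte
// cache line holds two or more records; the "cold" data, touched
// only occasionally, goes in a separate parallel structure.
struct ParticleHot {            // touched in the inner loop
    float x, y, z;
    float vx, vy, vz;           // 24 bytes: ~2-3 records per cache line
};
struct ParticleCold {           // touched only when a particle dies
    std::uint64_t spawn_time;
    char debug_name[48];
};

void advance(std::vector<ParticleHot>& hot, float dt) {
    for (ParticleHot& p : hot) {    // dense, cache-friendly sweep
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
    }
}
```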

The cache itself consists of several layers, and you should avoid as many cache loads as possible. If you can avoid touching variables until you really need them, you don’t disrupt the cache. You can also manually prefetch data with compiler intrinsics: _mm_prefetch can do that for you.
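For example, a loop could hint the cache a few elements ahead of the one it is working on (x86 only; the prefetch distance of 8 is my guess and would need tuning per CPU):

```cpp
#include <xmmintrin.h>   // _mm_prefetch (x86 SSE intrinsic)

// Hint the cache a fixed distance ahead of the current element.
// _MM_HINT_T0 requests the data in all cache levels; the distance
// of 8 elements is an assumption, not a measured value.
long sum_with_prefetch(const int* data, int n) {
    long sum = 0;
    for (int i = 0; i < n; ++i) {
        if (i + 8 < n)
            _mm_prefetch(reinterpret_cast<const char*>(&data[i + 8]),
                         _MM_HINT_T0);
        sum += data[i];
    }
    return sum;
}
```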

You can also use _mm_stream_ps, _mm_stream_ss and _mm_stream_sd to transfer data directly to RAM instead of letting it flow through the cache like you would normally do. Suppose you write data to a large array and you are not going to need it for a while. If you just write it like you would normally do, the entire cache is blasted with useless data. Using the intrinsics you can avoid this, and you also avoid having to flush the cache to RAM in the first place.
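A sketch of a streaming copy along those lines (x86 only; my example, not from the session): _mm_stream_ps writes four floats past the cache straight towards RAM, and the fence makes the stores visible before the data is reused. Note that the destination must be 16-byte aligned.

```cpp
#include <xmmintrin.h>   // _mm_stream_ps, _mm_loadu_ps, _mm_sfence

// Copy n floats (n a multiple of 4, dst 16-byte aligned) without
// polluting the cache: streamed stores bypass the cache hierarchy,
// which is a win when the data won't be read again soon.
void stream_copy(float* dst, const float* src, int n) {
    for (int i = 0; i < n; i += 4)
        _mm_stream_ps(dst + i, _mm_loadu_ps(src + i));
    _mm_sfence();   // make the streamed stores globally visible
}
```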

Compiling for smallest code (which might be less efficient) can sometimes yield faster execution times than optimizing for speed. The reason is that smaller code causes fewer cache misses. Using ‘whole program optimization’ also helps.

If your application is multithreaded, it should be made NUMA aware. NUMA means that a CPU can access its own local memory faster than memory that is local to another processor. If your app runs on multiple cores, you should use the available Win32 APIs like GetLogicalProcessorInformation and SetThreadAffinityMask to make sure that your threads stay on one NUMA node.

And finally, 64 bit compiled code is usually faster than the same code compiled for 32 bit, for the simple reason that there are twice as many registers available in 64 bit mode.

The applications that are slower in 64 bit mode usually suffer because code size increases (and with it the number of cache misses), and because data size increases if the app uses a lot of pointers.

This session was interesting and contained some good information for developing performant code.

TLA302: Best practices for native – managed interop in Visual C++ 2008

This session is hosted by Kate Gregory and is about the additional STL implementation that is delivered with VS2008: STL/CLR.

C++ programmers –well, some of them – often use the STL because it is a high performance library that is very flexible as well. It also comes with a wealth of containers and algorithms.

The problem with the existing STL is that it didn’t allow you to put managed pointers into the container classes. So you couldn’t simply have a vector of string^, because vector did not handle string^ correctly.

VC2008 now ships with a second implementation of the STL in a new namespace ‘cliext’ that is designed to work with CLR types. That STL has the same rules and features as the old STL, but its containers and algorithms are faster than any .NET equivalent.

The reason is that, as opposed to generics, C++ programmers pay the piper when they hit the build button. The compiler checks all types and method accesses – basically everything – at compile time. If your code is wrong, you get compiler errors. If it isn’t, you get fast code, because all the checking has already been done and isn’t repeated at runtime.

Converting from .NET collections to STL/CLR collections can be done by explicitly implementing the conversion routines, which is pretty trivial.

Additionally, you can’t pass templates across DLL boundaries. There are several technical reasons which I am not going to get into here, but they are why I cannot pass an STL vector directly to another assembly, even if that assembly also uses C++/CLI.

To solve this, every container implements a type ‘generic_container’, which is a .NET wrapper for the container that can be passed across DLL boundaries, so that other STL/CLR code can happily work with it.

There were a lot of code demos to show how easy it is to use if you are familiar with the STL.

At the end of the session there was also some attention for the marshaling library. This library contains template functions that allow you to marshal native types to .NET types in a very convenient way. Currently this library ‘only’ provides conversions for all string types.

But – C++ rules – since they are template functions, you can easily provide your own specializations for converting a .NET Rect to an MFC RECT or whatever.

Again, there was a large audience here. About 200 people or so.

This session was very good, with lots of praise due to Kate.


Today alone there were 3 sessions centered on Visual C++, and all 3 had great attendance. I think that shows that after the initial .NET hype, a lot of companies are coming to their senses and realizing that there are good reasons why C++ exists.

There is a tremendous amount of new stuff in Vista that can only be accessed easily from the C++ side, and it is going to stay like that for a long, long time because of a little thing called reality, which has shown that a completely managed platform is not yet feasible.

C++ has a niche where it fits, and it is not going to be replaced by anything, anytime soon.

Tech-Ed Barcelona 2007: Day 1

Today I have mixed feelings about being here, because it is my oldest daughter’s first day at school. I wanted to bring her to school together with my wife, but unfortunately that was not to be. I spoke with her on the telephone this morning, and she was really happy that she could finally go to school.

I just got a text message from my wife to tell me that she didn’t cry.

Breakfast here is nice. They have all sorts of fresh and healthy stuff in the buffet, various sorts of bread and cereal, fruits, … I am sure it all tastes great, but I went with the fried bacon instead. Nothing to get me going in the morning like 2 plates of bacon, bread with honey and a cup of coffee.

I just registered for the MVP influentials boot camp, which is a private session for MVPs and community influencers. The idea is to have discussion groups on separate topics, in which you can participate when you feel like it. The content itself is covered under my NDA, so I can’t write anything about it.

It was basically talk about community issues, and not technical issues.


The keynote was delivered by S. Somasegar, VP of the developer division, in an auditorium that smelled of paint fumes. It quickly became obvious where that was coming from: 2 graffiti artists were making paintings on the stage. They were wearing gas masks, of course.

Those paintings were not part of the keynote, nor were they referred to, so it is beyond me what the point was.

Apparently there are a million VS users worldwide, with 25% of them paying for the Team System environment. There are also 17 million registered downloads of VS Express.

The keynote revolved around the new features of .NET and VS:

  • LINQ, and the new Sync framework which makes it easier to synchronize between online and offline collaboration.

  • The new .NET technologies WCF, WF and WPF

  • Silverlight and popfly. Tools for easily creating and modifying web pages and web applications.

Microsoft is also going to deliver guidance on the use of new technologies through extensive demo applications, and by providing blueprints.

The future of MSDN is to expand with a code gallery, an already existing wiki page in which everybody can add comments or remarks to expand the usefulness of the documentation, and a translation wiki. This is a pilot project to translate MSDN documentation to different languages.

Another very noteworthy item is that VS2008 and .NET 3.5 will be released in November 2007. This is very good news, since it contains a lot of features (WPF, WCF, WF, …) that I care about, and it will also contain a lot of new C++ features about which I am not yet allowed to talk. Luckily, Kate Gregory, Ale Contenti and Steve Teixeira will expand on those, after which I can talk about them too.

Then there was a demo of the new Silverlight web technology, which was pretty cool. Not being a web developer I couldn’t judge the impact, but according to Tom it was all neat stuff.

Another noteworthy thing is that VS2008 is developed using VS2008 Team Foundation Server: a thousand developers working on the same project, managing 30 million lines of code… In my book, that is an important vote of confidence in the Team System technology.

The license for Visual Studio has also changed. You are now allowed to use VS to build applications for other platforms, like Linux or BSD. Apparently, this was forbidden earlier. The IDE source code is also becoming available to help you write plug-ins.

The next version of VS is codenamed ‘Rosario’, and will focus on organizational collaboration, QA and advanced developer tools.

SEC302: Windows Vista Security for developers

This session was hosted by Rafal Lukawiecki.

My other option would have been the session on VS2008 and its new features. That would have been interesting as well, but I already saw some of it last year (with the VS2008 beta), and Vista security has been seriously annoying me, so I wanted to learn more about it.

The goal of Vista is to achieve NIST Common Criteria Certification compliance. This seems to be the gold standard for identifying an OS as secure.

Currently, Vista ‘seems’ to be more secure than XP, judging by the number of exploits and vulnerabilities in a given time since release. This period is not yet statistically significant, but so far it looks good.

Vista has a number of features that make it more secure, and I will briefly touch on them here:

  • During boot, the system files are protected by BitLocker and TPM, ensuring that no offline changes were made to them.

  • Network Access Protection (NAP) allows administrators to force computers to comply with the policies of the network before being granted access to the corporate network. This is done by giving the machine an address that is only usable on a tiny subnet, just for the sake of enforcing NAP. Only when the system is up to date will it be given a network address on the corporate network.

  • Everybody runs as a standard user, and gets elevation dialogs for actions that require privileges.

  • IE7 has better protection against phishing and malware.

  • The restart manager can apply updates and reboot the system while the computer is locked and has applications open. After a reboot, the system and application are restored to their previous state if possible. Note that this needs explicit application support, which is currently only implemented by a couple of Microsoft apps, most notably Office 2007.

  • The service layer in Vista has been significantly hardened. Each service now has a unique SID that can be used to restrict the things it can do, and it can also be used by service programmers to define the only privileges they require to operate. Furthermore, the user account of a service is now LOCAL_SERVICE or NETWORK_SERVICE where possible, instead of LOCAL_SYSTEM as it used to be.
    The number of layers in the service infrastructure has also been increased, separating high risk functionality from low risk functionality. A lot of stuff has been thrown out to make the high risk layer more secure.

  • DLLs are now loaded at one of 256 possible random locations. An attacker can no longer assume that system DLLs are located at fixed addresses, reducing the possibility for compromise.

  • System components obfuscate long-lived pointers that are accessed infrequently. This is another way to reduce the attack surface of those components.

  • Vista has more support for Data Execution Prevention (DEP) and No-Execute (NX) technology, preventing attackers from writing code into a memory region and then executing it.

  • .NET 3.0 has improved CAS and evidence technology for increasing security and authorization.

  • .NET 3.5 will further implement trust levels between an application and its external controls, and reflection will be made opt-in for private members.

  • .NET 4.0 will have even better security integration, but I didn’t really understand those features since they were only mentioned in passing, and it will be some time before it is released.

  • Networking-wise, Vista has received a hardened TCP/IP stack with a dual implementation for IPv4 and IPv6. If you accept IP addresses in your GUI, be sure to allow both kinds, or your app will not work correctly on IPv6 networks.

  • There is an application-aware outbound firewall, which means you can restrict applications from making outbound connections. This greatly decreases the chances of malware making an outbound connection and sending your private information to an attacker. By the way, the IP stack itself is not vulnerable to the known modern-day attacks.

  • UAC (the annoying pop-up feature) can be controlled through local policies. Older applications are not UAC aware, so they will either be virtualized (running with their own virtual file system and their own virtual registry) or constantly nag you.

  • The authentication subsystem has been overhauled, and GINA was one of the casualties. For those who don’t know, GINA was the DLL that took care of authentication on Windows XP and earlier. If you wanted to provide a different logon mechanism, you needed to hack GINA to shreds. And there could only be one GINA active, so it wasn’t too flexible either.
    GINA was shot in a back alley, and superseded by a new pluggable authentication subsystem in which multiple parties can provide authentication mechanisms like a retina scanner or a DNA sampler (hypothetically speaking).

  • Windows CardSpace is the new claims-based authentication model that can be used by web applications (or others) to authenticate you based on PKI certificates and a load of other stuff, without you having to care about that stuff. It would allow single sign-on, and automatic authentication if the correct identity cards are in place. It will also allow authentication while respecting your privacy. For example, a website could enforce an age limit without actually needing your date of birth.

  • BitLocker is a technology that provides whole-disk encryption to protect your files, and it can use additional items like USB dongles or passwords to further increase security.

  • There is a new cryptography algorithm suite in Vista that is compliant with the NSA Suite B requirements. It enables you to create secure applications and provides you with a secure system, since Vista uses these algorithms internally for all crypto-related work.

  • TPM allows you or the system to store secret information (like keys) in a way that it cannot be extracted through any other means. There was no in-depth discussion of TPM, but it relies on the availability of a TPM chip on the motherboard.

All in all this was an interesting discussion, and well worth the time. There are several more security sessions this week, and maybe I’ll attend one or more of them, but at least now I have an idea on the security infrastructure in Vista.

And I also know why it nags so much to do trivial things.
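For completeness: an application declares its UAC awareness through an embedded manifest, which is also what decides whether it gets virtualized or prompts for elevation. A minimal sketch, with the element names taken from the standard schema:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- "asInvoker" runs without elevation; "requireAdministrator"
             triggers the UAC prompt; an app without any manifest is the
             one that gets the virtual file system and registry. -->
        <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
```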

In between

I had a phone call with my oldest daughter at 17:30 between the sessions, and I was glad to hear that she had enjoyed herself on this first school day. She didn’t even want to come home from the after-school care, so my wife had to carry her out.

WIN302: .NET 3.5 end to end: putting it all together

This session was hosted by Matt Winkler and David Aiken. It started with a power outage that delayed everything by 10 minutes. By the time they got the session going, it was already too late to join another one.

To summarize: they showed a demo application using all the new .NET features, and then they TOLD you which feature they were using at any given time. ‘Now I want to see the food and order something, which is done with WCF. Now I do this, which uses that, yada yada yada…’

In other words: all talk, no code. No technical stuff was discussed, so I decided to leave.

It’s not that big of a disaster though, since there was nothing else I really wanted to see in this session slot. They kept all the good stuff for tomorrow and later.

So I used the free time to write my reports. Later today I will go to the welcome reception and check out the different booths in the exposition hall.

Welcome reception, afterthoughts

The welcome reception is a bit chaotic, though that can be expected when thousands of geeks and nerds descend on an exhibition hall where people are giving away a limited amount of free stuff.

Let’s see: I got an inflatable microphone (nice for my oldest daughter), an earth-shaped stress ball (ditto), a signed copy of ‘The Security Development Lifecycle’ by Michael Howard and Steve Lipner (it is beyond me why they were giving these away, but I didn’t argue) and a free Tech-Ed 2007 T-shirt.

I drank some beer and ate some snacks, and decided to call it a day.

Tech-Ed Barcelona 2007, day 0

I just arrived at the CCIB and was able to register in less than 5 minutes. The travel itself was uneventful.

I got picked up by a cab at home, checked in, idled around the airport for some time and had the most boring flight ever. This is good: I don’t crave excitement when I am floating 20,000 feet above solid ground with nothing between me and a minute of free fall but a couple of millimeters of aluminium.

I also met up with Tom Heylen, a former colleague of mine who is now making a fine career for himself, working as a consultant for Microsoft on international projects.

My hotel is the ‘Hotel barcelona princess’ and is right in front of the CCIB, which is nice. The CCIB is the red building on the left side.

Of course this also means that I am away from the Barcelona city center and the ‘Plaça de Catalunya’, but I am not that much of a tourist, and going to a bar by myself is not a hobby of mine. However, there is a big shopping mall with lots of restaurants and little shops where I can spend some time in the evening.

Besides, this also gives me the opportunity to catch up on some sleep. My oldest daughter has not yet adapted to daylight saving time, while my youngest has. This means that I still go to sleep at 23:30, and get woken up at 06:00 every single day.

The weather here is very nice. It must be a good 18 degrees centigrade under a cloudless sky. I walked around in the afternoon, and spent some time walking on the beach.

I took this one sitting on one of those huge stone blocks that break the wave crests. Unfortunately it is Sunday, and most everything around here is closed.

Ah well, I ate at an Italian restaurant, and spent an hour or two in my room, tweaking my LineReader class to improve its performance. Without anything else to do, that seemed as good a way to spend my evening as any.