Monthly Archives: June 2012


My blog has been moved to

It has been migrated to the WordPress platform.
You can find this post



Back in town

I’ve decided to start blogging again, on the subject of C++. A couple of years ago, just before the release of VS2010, I had become jaded with C++. The standard was still nowhere near finalized, Visual C++ was getting none of the ‘designer’ loving.

Sure, we had C++/CLI, but only after the abomination that was Managed C++. And while C++/CLI was a decent language and indeed ‘just worked’, the only thing it was good for was writing glue code to run native code in a managed wrapper. For all other things, C# was a vastly better choice.

Fast forward a couple of years, and it is a whole new world.

The standard has finally been ratified, C++ has gotten a much needed refresher (both language and library wise), it has suddenly become hip again with Metro and the need for fast code with a small footprint, and with interesting things like PPL, AMP and ALM, there is a brave new world to be discovered. I am excited about C++ again!

I am also not typing this on my development machine or my laptop, but on the Windows 2012 Server machine that I created in the Azure cloud. It is lovely to have a performant dev machine to play with. Given the very low cost of Azure VMs, I can’t really justify buying a new development machine when the old one kicks the bucket. And that is not even considering the benefits of having access to the machine everywhere, having it patched automatically, and never having to worry about hardware problems.

Ok, I suppose no one really missed me or even knew I was gone for 3 years. I also decided to come up with a new name for my blog. The cluebatman theme was getting a bit dorky. ‘C++ programming on cloud 9’ is better for now. The new C++ standard has made me happy, and I am running my dev machine in the cloud.

Still cheesy…

When I come up with something better, I’ll change it.

Anyway, I’m back!

WinRT Async Tutorial

WinRT applications, available on both Windows 8 and Windows RT operating systems, make extensive use of asynchronous programming concepts. In fact, with WinRT, Microsoft has followed a simple rule: if an API is expected to take more than 50 milliseconds to run, the API is asynchronous. The rationale behind this decision was to make user experience fluid and not hang or block the UI while an operation is being completed.

In this tutorial, we create two simple applications that demonstrate asynchronous programming using direct API calls, as well as writing custom methods that are asynchronous. We discuss the ‘await’ keyword and how it is used. We also demonstrate the concept of an asynchronous ‘task’ and how we use tasks to implement asynchronous activities.

Install Windows 8 CP and later with Mediaforma

Patrice2012
Hello everyone,

Today Mediaforma, your online trainer for Microsoft Windows products and many others, walks you through installing Windows 8 CP in a virtual machine.

In plain terms: when you no longer want this beta on your machine, you can simply delete it without touching your current installation.

Using VirtualBox, Michel Martin shows you how to install Windows 8 on your machine.

This way to the installation

Mediaforma is the training website of Michel Martin, author of numerous computing books covering the vast majority of Microsoft products, for all levels of experience.

Have a good day.

Phishing targeting FREE ADSL

Hello everyone,

Customers of the ISP FREE are once again the target of a phishing attack.
This time, the messages come from the production server, and for disreputable purposes.

Of course, no reply should be given to this fraudulent message, which aims only to steal information about your identity and/or your bank details, starting from:

Spread the word around you, especially to the people in your circle who are less familiar with this kind of scheme.

Here is the source of the message:

Received: from (LHLO ( by with LMTP; Fri, 29 Jun 2012 05:22:48 +0200
Received: from ( [])
by (Postfix) with ESMTP id 7E60CAA706A
for <>; Fri, 29 Jun 2012 05:22:48 +0200 (CEST)
Received: from ([])
by (MXproxy) for;
Fri, 29 Jun 2012 05:22:48 +0200 (CEST)
X-ProXaD-SC: state=HAM score=0
Received: from (localhost [])
by (Postfix) with ESMTP id 5E35790093E
for <>; Fri, 29 Jun 2012 05:21:35 +0200 (CEST)
Received: by (Postfix, from userid 10000)
id 5B075902650; Fri, 29 Jun 2012 05:21:35 +0200 (CEST)
Subject: Freebox Votre Service abonnement
MIME-Version: 1.0
Content-type: text/html; charset=iso-8859-1
From: Votre Abonnee <>
Message-Id: <>
Date: Fri, 29 Jun 2012 05:21:35 +0200 (CEST)
X-Virus-Scanned: ClamAV using ClamSMTP


Have a good weekend.

Tech-ed Amsterdam 2012: Day 5




I checked out and brought my luggage with me to the RAI. There is a luggage / cloak room where almost no one drops off their stuff, so I am using that one instead of the main one. Hopefully, it’ll save me some time when it is time to leave.

Yesterday I hung out with Steve for a while. It’s things like these that make tech-ed more than just about learning. As I mentioned earlier, it is nice to stay in touch with people across years.

DEV332: Async made simple in Windows 8, with C# and VB.NET

This session is hosted by Dustin Campbell

Async is the norm for Windows RT, where asynchronous programming is the only way to program. Synchronous programming and blocking are no longer acceptable for user applications, in order to ensure that applications are responsive and scalable.

Futures are objects representing pieces of ongoing work. They are objects on which callbacks can be registered to be executed when the work completes (like doing something with downloaded data). Futures are basically syntactic sugar to make existing async programming patterns more palatable. The only downside is that you get nested lambdas for tasks that execute in several steps. Apparently, this is called macaroni code.
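The nesting is easy to see even in plain C++. Here is a minimal, hypothetical sketch (the step names and callback shapes are invented for illustration, not from the session): each step hands its result to a callback, and a three-step pipeline already ends up two lambdas deep.

```cpp
#include <functional>
#include <string>

// Continuation-style pipeline: each step takes a callback to invoke when
// its result is "ready". The bodies are synchronous stand-ins for real
// async I/O, but the shape of the calling code is the same.
void download(const std::string& url, std::function<void(std::string)> done) {
    done("payload from " + url);            // stand-in for a real download
}
void parse(const std::string& raw, std::function<void(int)> done) {
    done(static_cast<int>(raw.size()));     // stand-in for real parsing
}

int run_pipeline(const std::string& url) {
    int result = 0;
    download(url, [&](std::string raw) {    // step 1
        parse(raw, [&](int n) {             // step 2, nested
            result = n * 2;                 // step 3, nested deeper
        });
    });
    return result;
}
```

Each additional step adds another level of indentation, which is exactly the "macaroni" the session pokes fun at.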

To fix this, C# has await and async keywords.

Await takes the rest of a method and hooks it up as a callback for the asynchronous operation which is being handled. The async keyword is used on the method itself to tell the compiler it has to do this. The callback will always happen on the same thread that the operation was started from, so resource contention is not a problem, because while the code is running, the thread is not doing anything else.

So while your source code looks like something that is executing synchronously, it is actually broken into different pieces which are executed asynchronously. This is really neat, and it hides a lot of the ugliness of asynchronous programming. Even if you are not programming for Windows 8, this is a valuable feature for regular applications that require asynchronous programming.

Exception handling is built in, because the underlying IAsync operation captures the exception and presents it to the caller. Exceptions can then also bubble up through various completion tasks, and can be handled simply in the event handler like you would normally do. This is sweet, and much, much more convenient than if you had to deal with it manually.
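The session is about C#, but standard C++ futures capture exceptions the same way. A minimal sketch (names are mine; std::launch::deferred is used only to keep the sketch single-threaded): the exception thrown inside the task surfaces when the caller asks for the result, where an ordinary try/catch handles it.

```cpp
#include <future>
#include <stdexcept>
#include <string>

// An exception thrown inside an asynchronous task is captured by the
// future and rethrown when the caller calls get(), so it can be handled
// exactly like a synchronous exception.
std::string fetch_or_default(bool fail) {
    auto f = std::async(std::launch::deferred, [fail] {
        if (fail) throw std::runtime_error("download failed");
        return std::string("data");
    });
    try {
        return f.get();   // the captured exception is rethrown right here
    } catch (const std::runtime_error& e) {
        return std::string("fallback: ") + e.what();
    }
}
```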

SIA311: Sysinternals primer: Gems

This session is hosted by Aaron Margosis. I’ve seen him present a similar talk a couple of years ago.

The room is not full. Plenty of seats are left open. I think this has to do with the fact that it is the last day. Aaron announced that there would be a book signing, but also mentioned that in their infinite wisdom, the organizers have decided not to have a bookstore on site. Yeah… I noticed. Someone should have his ass kicked because of it.

The entire session was demo driven, so I didn’t take notes. It was mainly about the unknown utilities or unknown features of well known utilities in the sysinternals suite.

DEV334: C++ Accelerated Massive Parallelism in Visual C++ 2012

This session is hosted by Kate Gregory, and covers the new C++ AMP tools which allow you to offload number crunching to the GPU. The room is not full, I suspect it has roughly the same group of people who were also at the pre-con sessions.

The session started with the overview of why you want C++: Control, performance, portability.

With AMP, your code is sent to an accelerator. Today, this accelerator is your GPU, but other accelerators might appear. The libraries are contained in vcredist, so you can distribute your AMP app just like any other app. And because the spec is open, everyone can implement it, extend it or add to it. Apparently, Intel have already done that.

The key to moving data to and from the GPU is the class array_view<T,N>, which represents a multi-dimensional array of whatever you need. You populate those structures, and then call the parallel_for_each() library function. This function will do all the heavy lifting and data copying for you. When the parallel_for_each finishes, the result will be ready for you.
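That workflow can be sketched roughly as follows (requires Visual C++ and <amp.h> on Windows; the function name is mine, not from the session): wrap ordinary vectors in array_views, run parallel_for_each with a restrict(amp) lambda, and synchronize to copy the result back.

```cpp
#include <amp.h>
#include <vector>
using namespace concurrency;

// Element-wise vector add on the accelerator: a minimal C++ AMP sketch.
void amp_add(const std::vector<float>& a, const std::vector<float>& b,
             std::vector<float>& sum) {
    const int n = static_cast<int>(sum.size());
    array_view<const float, 1> av(n, a), bv(n, b);
    array_view<float, 1> sv(n, sum);
    sv.discard_data();                      // no need to copy old contents in
    parallel_for_each(sv.extent,
        [=](index<1> i) restrict(amp) {     // this lambda runs on the GPU
            sv[i] = av[i] + bv[i];
        });
    sv.synchronize();                       // copy the result back to 'sum'
}
```

The lambda captures the array_views by value, which is what tells the runtime which data has to be shipped to the accelerator.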

Some restrictions:

You can only call (other) AMP functions. All functions must be inlineable and use only AMP-supported types, and you won’t be doing pointer redirections or other C++ tricks. There is a list of things that are allowed and not allowed, but they are really all common sense.

There is also array<T,N>, which is nearly identical to array_view, but if you want to get data out, it has to be manually copied. At least that was my understanding. Things are going fast at this point so it is possible I’ve missed something.

If you want to take more control of your calculation, you can use tiling. Each GPU thread in a tile has a small programmable cache, which is identified by the new keyword tile_static. This is excellent for algorithms that use the same information over and over again. There is an overload of parallel_for_each which takes a tiled extent. However, the programmer is responsible for preventing race conditions, so use a proper access pattern with tile barriers.

What is particularly interesting is that Visual Studio 2012 has support for debugging and visualization. You can choose between CPU breakpoints and GPU breakpoints as the debugger type, and apparently you need to debug on Windows 8. It just works, and this was probably a huge chunk of work for someone, somewhere in the VS debugger team :)

There is also a concurrency analyzer which is really good for figuring out CPU / GPU activity and how it correlates to your code.


That’s it for today. Time to go home.

I am glad attention got called to the fact that there is no bookshop. I’ll have to put that in the official feedback as well. And speaking of silliness: at this Tech-Ed there was exactly one session about the new C# keywords for asynchronous programming, and one on .NET 4.5 features. And for some inexplicable reason, they were scheduled in the same timeslot. Someone dropped the ball there as well.

Tech ed was a valuable experience yet again. I’ll post an overall tech-ed wrap-up tomorrow.

Outlook Configuration Analyzer Tool 2.0

The Microsoft Outlook Configuration Analyzer Tool 2.0 was recently made available to download.

According to a post at the Microsoft Exchange Team blog, here are the new features:

  • Automatic downloading of new detection rules – As we create new rules to detect issues or to collect additional information about your Outlook profile, we will post a new updated rule file to the Internet. With the default OCAT version 2 configuration, OCAT will automatically check for a new rule file and prompt you to update OCAT if a new file is found.
  • Automatic downloading of OCAT installation files – As we update and fix the core OCAT application, we will post a new Windows Installer package file (.Msi) to the Internet. With the default OCAT version 2 configuration, OCAT will automatically check for a new installer file and prompt you to update OCAT if a new file is found.
  • Addition of the CalCheck tool – The Calendar Checking Tool (CalCheck) for Outlook is a command-line program that checks Outlook Calendars for problems. This tool is now included in OCAT to scan for and report on any known problems with items on your primary Calendar.
  • Addition of new detection rules – To greatly enhance the list of known issues detected by OCAT, approximately 75 new rules were added to OCAT version 2.
  • Improved support for Outlook 2003 – OCAT v1 supported Outlook 2003 using offline scans; however, most people did not realize it because of the error shown when you tried to run an online scan.
  • Command-line version of OCATcmd.exe – OCAT version 2 includes a command-line version of OCAT (OCATcmd.exe) that administrators can use to scan computers in their organization. Please see the OCAT v2 Supplemental Information Download.docx file for details on how to use the command-line version of OCAT.

Brief Description
The Outlook Configuration Analyzer Tool 2.0 (OCAT) provides a quick and easy method to analyze your Microsoft Office Outlook profile and mailbox for common configurations that may cause problems in Outlook. This can be very useful for busy Help Desk personnel when end-users call for help with Outlook or when you want to identify possible issues with Outlook proactively.

The Outlook Configuration Analyzer Tool 2.0 provides a detailed report of your current Outlook profile and mailbox. This report includes many parameters about your profile, and it highlights any known problems that are found in your profile or mailbox. For any problems that are listed in the report, you are provided a link to a Microsoft Knowledge Base (KB) article that describes a possible fix for the problem. If you are a Help Desk professional, you can also export the report to a file. Then, the report can be viewed in the Outlook Configuration Analyzer Tool on another client computer where the tool is installed. The Outlook Configuration Analyzer Tool 2.0 also includes a command-line version that can be used to collect an OCAT scan without user intervention.

System requirements
Supported operating systems: Windows 7, Windows Vista Service Pack 2, Windows XP Service Pack 3

This download works with the following Microsoft Office programs:

  • Microsoft Office Outlook 2003 (Offline Scans only)
  • Microsoft Office Outlook 2007
  • Microsoft Outlook 2010 (32-bit or 64-bit)

The following minimum version of the Microsoft .NET Framework is required:

  • Microsoft .NET Framework Version 2.0

The .NET Programmability Support feature included with Office must also be installed.

Tech-ed Amsterdam 2012: Day 4

I played some more with my Azure devbox, and I have to say the responsiveness is great. I had another phone call from the home front, and I am looking forward to going home and see my family.

The weather here is clearing up nicely, and the park I have to walk through was just lovely. Lots of trees and little lakes, with gaggles of geese and ducks and fish. At one point I had to jump aside or risk being trampled by a horde of women in spandex.

Today there are several interesting sessions I am looking forward to, including the next one.

What’s new in Active Directory in Windows Server 2012

This session is hosted by Samuel Devasahayam and Ulf B. Simon-Weidner.

ADPrep and DCPromo are now gone. Server Manager makes it seamless, and ADPrep functionality now runs automatically when the first machine is promoted. It can also be run remotely, which is a nice feature for remote installation. Validation checks were added to make sure common errors are eliminated. What is nice is that all functionality is implemented under the hood as PowerShell cmdlets, so everything is scriptable.

If there was a network hiccup, dcpromo would fail. This has now been made more robust. Again, for me this is less relevant because I always work on a LAN, but it is important for customers with distributed networks.

Improved management experience, including a Recycle Bin GUI.

A common problem with virtualization is that rolling back to a snapshot messes with Active Directory. Everything created during the rolled-back time period can be inconsistent. Therefore it becomes necessary for AD to figure out whether there has been a rollback or not. This seems to be done in cooperation with the hypervisor. What was interesting is that they still say that a snapshot is NOT a valid backup / recovery method, which is explained later on.

The example that was shown was of a RID pool which started reusing parts of the RID pool that had already been given out. Support has now been added to detect such cases and deal with them in a consistent manner.

Interesting new functionality is the ability to clone domain controllers. That is kind of cool. Admins can now take a DC and clone it into multiple copies -> easy deployment of DCs to branch offices, but also to create redundant DCs. There was a flow chart covering the various steps that are taken internally, such as discarding the RID pool to prevent some of the problems mentioned earlier. You have to be careful that third-party software might not like being cloned. DNS, FRS, and DFSR are supported.

RID improvements. Currently, it is possible to deplete the entire RID pool of a forest, after about a billion RIDs. This seems odd. There was a sequence of events identified which could lead to this problem. The only solution was to do an entire forest migration. If you have a forest that is big enough to encounter this problem, this is probably not a happy scenario. The list of possible causes was covered. One topic I’ll have to read more about is DC reincarnation, because that is related to our backup and recovery scenario. I don’t think we have any of these problems, but it pays to make sure.

There was a mention of deferred index creation, which has to do with schema changes in large enterprisey networks, so not really applicable to me. Offline domain join (join across the internet) is another feature that I can see being nice for customers with a large geographic distribution. Ditto for connected accounts. This feature allows you to connect your Live ID to your AD account. It is not possible to log into AD with your Live ID though.

LDAP logging has been improved, with added controls and behaviors, which is always nice for anyone.

AD Administrative center looks nice. There is now a single tool to manage AD. That has been long overdue. One of the things it allows you to do is to enable the recycle bin. The recycle bin already existed in 2008R2, but now it is integrated in the GUI. It is not yet nested for now (meaning OU and CN structure). It is not yet a transparent ‘undo’. There is also a history viewer which shows the history of your AD transactions, with the powershell syntax visible. That is really nice, both for debugging and scripting.

Password policies are now also granular, meaning it is possible to set different policies for different types of accounts. This is another thing that was possible before with 2008, but not via the GUI.

Clustered or load balanced services that share a security principal are now supported by Managed Service Accounts. This is also nice, considering that more and more servers are clustered. Replication and topology cmdlets are now supported for managing site topology and replication in a consistent and scriptable manner.

One thing that is again very interesting is Active Directory based activation, meaning you no longer need KMS to activate your clients and servers in a volume licensed environment.

WCL289: Windows 8 demos

This session is hosted by Brad McCabe

No PowerPoint. One thing that is nice is that there is no plain logon screen. You get immediate feedback from your apps. Logging in can be done via password, swipe, or picture manipulation. The latter is interesting, especially in a tablet environment where it is easy to log on by tapping a couple of landmarks and then poking your wife in the nose :)

The desktop is not a sea of icons, but rather a sea of tiles that are grouped together. The tiles themselves contain live information. The apps themselves follow this tiles paradigm as well, allowing you to flip through your application. For data-driven apps that makes a lot of sense. I wonder if it will be as efficient for control-type applications.

One thing that is interesting is that apps actually start to look like they do on TV, with animations, sliding panels and transparent surfaces, all without a lot of effort on the part of the application developer. That is all supported and provided by the Metro libraries and the Windows RT subsystem.

The same functionality exists between touch and mouse / keyboard. Touch is all about how you interact with the edges, mouse uses edges and right click. And it was mentioned yet again that devices and desktops have the same kind of user interface.

Apps following a share contract can easily exchange information, even though they don’t know anything about each other. The search contract allows users to search apps as result providers.

The traditional desktop still exists, and can be used side by side together with the new Metro style UI.

Windows 8 has a new concept called storage spaces. All devices can be pooled together in a storage pool and storage spaces. You can allocate larger sizes than physical space, and more space can be provisioned as it is needed. Not quite sure how this is more useful than the ability to use dynamic disks that can be grown. It also supports mirroring and parity.

Windows To Go was demonstrated running off a stick. This was similar to the demonstration already shown during the keynote yesterday.

DEV317: Going beyond F11, Debug Faster and Better with Visual Studio 2012

This session is hosted by Brian A. Randell.

I think I saw Brian speak at earlier Tech-Ed events. He is a very good speaker if memory serves me well. Very animated and able to drag the audience along for the ride. He also understands the value and use of silences.

I was first considering going to WSV326: ‘Windows Server 2012, a Techie’s perspective’, but after looking at the summary, it seemed to overlap with the Active Directory session I saw earlier this morning. It would probably go a bit deeper on things like Kerberos and compound identity, but that is not really something that is applicable to me. The chances of running a Windows Server 2012 environment before 2015 are slim as well.

Another candidate for this session slot was DEV314: Azure Development with Visual Studio 2012. That looked interesting, but I am unlikely to develop for the cloud or develop multi-tiered applications, so it was not really that useful to go to.

Install SP1 to fix bugs in unit testing and make it more performant.

Debugging: use ‘Just My Code’. This shows only your code in the call stack, and not other DLLs and frameworks; otherwise the entire stack is shown. Source stepping into the .NET code is also supported for the parts of the source code and symbols that have been opened up by Microsoft. This requires source stepping and source server support. Keep looking at the options menu for debugging, because that is where some old and new features are still hiding.

There was an explanation of the various neat things you can do with breakpoints, like making them conditional or counted. This is rather old stuff really. I didn’t know about pinnable data tips though. These are like tiny quickwatch windows which are shown over the code, and which are updated while debugging. They are visualizations of live variables. I thought that was neat. There is also a breakpoints window showing all breakpoints and various information.

Remote debugging finally works in VS2012. It existed already, but didn’t really work that well. This feature was seriously improved for Windows 8, for the purpose of remote debugging applications on ARM or other devices. It works tethered and over WiFi. You need to install remote debugging tools on the target platform.

Debug->Attach to process (machine name and process selection). From then on you can set breakpoints and break into the debugger.

For the paying versions of VS, all tools are available with the installation CD. Express users have to download it manually. Then it has to be installed, and you have to run the remote debugging configuration wizard. This is basically to give access and configure firewall rules if there are any.

The profiler has also had improvements, making it possible to get more information about your application as it runs, even inside the simulator. Analyze -> Launch Performance Wizard. There are 4 profiling options that are more or less invasive. Each has its benefits and problems. CPU sampling can show hot code and call-path issues. This can help you identify the most interesting places to optimize.

Love, hold and protect your pdbs. If you distribute applications, you need to track your pdbs and make sure you have all versions. With TFS, you can automate this process and link it to the version of your code that was built. For me this is not an issue, since I don’t have TFS. With the applications I distribute, I distribute the pdb files along with them.

At the end of the talk, there was a description of IntelliTrace. This is particularly useful if you inherit someone else’s code. It is a historical debugger, which needs Visual Studio Ultimate. The output is an i-trace file, built on the CLR debugging and profiling APIs, and you can navigate up and down the stack frames with Visual Studio. Everyone can collect logs, including users and testers. You can collect IntelliTrace events, as well as method entry and exit.

There was a demonstration showing these features, and it really looked nice. If you have a business model where you distribute a lot of applications to big customers, this is very worthwhile. Mind you, if you only have small customers it is nice as well, but your boss probably needs an argument for why spending all that money is a good idea.

SIA302:  Malware hunting with the sysinternals tools

This session is hosted by Mark Russinovich.

My attendance here is a no-brainer. Firstly because the only other interesting thing in this slot is Windows 8 Metro, and Steve will be covering that as well. Besides, it was already demonstrated this morning as part of the Windows 8 demo. And secondly, if there is no really compelling counter-argument, you just can’t not go listen to Mark Russinovich. He is to nerds and geeks all around the world what Justin Bieber is to pre-teen girls.

I am sitting outside the theatre waiting for the doors to open, because I am anticipating a big turnout for this talk. I’m also making sure my batteries are charged again.

The room is filling up completely. People are being encouraged to move to the middle of the seats because according to the woman trying to fit everyone in the room ‘People are not going to crawl over you; 98% of the people here are males!’

I didn’t take notes here, since the entire session was a demonstration of how to use process explorer to try and remove malware from a system without repaving it. It was an interesting demonstration, but not something I would attempt myself I think. At the end he also discussed Stuxnet and Flame, and how sophisticated they were. Stuxnet in particular is a bit scary. Not what it does, but how well it does it and remains hidden.

DEV367: Building Windows 8 Metro style apps in C++.

This talk is hosted by Steve Teixeira. As with Mark’s talk, I just can’t not go, regardless of what the other sessions would be.

Actually I am looking forward to this talk for several reasons. I am going to get back into C++ programming, and since Metro is currently the way forward, and C++ is now finally a first-class citizen again, Metro is the way to go. It will be very useful to see a demo on this topic so that I can hit the ground running.

And let’s face it, if you are like me (if you are a C++ programmer, that is a strong possibility) you think that a grey dialog with a square button is an example of good UI design. Throw a listbox on that form and you’re the man. Yet if you want hip people to think your app is hip, you need your app to look like the other apps on their hip device. With metro, a lot of that work is done for you so that is good I suppose.

I rushed to this room after Mark’s talk, because Steve’s session was packed yesterday, and there is only half an hour between Mark’s session and Steve’s.

C++ supports all 3 app models: XAML, HTML/JavaScript, and DirectX. As was already mentioned before: the designer now treats C++ as a first class citizen. And Visual C++ is optimized for Metro, from project templates to Intellisense, designers, and deployment managers… the works.

After creating a standard XAML program, you can use the manifest designer to define the ways in which Windows treats your program, the capabilities you specify your app needs, and how it gets packaged.

C++/CX is a set of language and library extensions to allow consumption and authoring of Windows RT types. It is 100% native C++ code. The syntax looks like C++/CLI, and uses many of the same conventions. In fact, just by looking at it you might be lulled into thinking you are looking at C++/CLI.

It is deeply integrated with the STL, which makes sense as everything is native. An important remark was to use only C++/CX at the surface area of your application, and keep the rest in ISO C++. That way your codebase remains portable, while still having a surface that is consumable by WinRT clients.

Important detail: exceptions don’t travel across module boundaries. They get translated to an HRESULT and then rehydrated into a COMException. So not only do the exception types not transfer, but you also should not derive from these exception types, because the translation will not know about your exception type and you will lose your hierarchical information.
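To see why derived types get flattened, here is a simplified, hypothetical model of such a boundary in portable C++ (the HRESULT value and all names are illustrative, not the real WinRT ABI): whatever was thrown, the boundary only ever hands back an error code, so the concrete type is gone by the time the other side rehydrates it.

```cpp
#include <stdexcept>

// A toy model of an ABI boundary that can only carry an error code.
using hresult = long;
constexpr hresult E_BOUNDARY_FAIL = 0x80004005L; // illustrative failure code

struct my_parse_error : std::runtime_error {     // app-specific exception type...
    using std::runtime_error::runtime_error;
};

// What the boundary layer effectively does: any exception, no matter how
// specific, is flattened to a number. The type information is lost here,
// which is why deriving your own exception types does not survive the trip.
hresult boundary_call(void (*fn)()) {
    try {
        fn();
        return 0;                                // S_OK
    } catch (...) {
        return E_BOUNDARY_FAIL;
    }
}
```

On the consuming side, all that can be reconstructed from E_BOUNDARY_FAIL is a generic exception, never my_parse_error.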

Metadata of your app or dll will automatically be stored in the .winmd metadata file.

C++/CX also has partial classes. This is neat, and is what was needed to allow the IDE to work on your class and consume it without getting in your way. Your UI is completely configured via XAML,  which connects to your code.

Hybrid C++ / JavaScript: high-level programming where it matters. The HTML project with JS contains not one bit of C++. This is the UI project, not the functionality behind the buttons. Then the WinRT project gets added to the solution. The HTML project can then just consume the WinRT component.

These 2 options of programming look really similar, and you do practically the same thing. I guess the only reason to pick one over the other would be which you would be more comfortable with.

Note on deployment: the model is built on the idea that apps are distributed via the store. However, this does not really work for developers :) or enterprises or other scenarios. To provide this functionality, you can package your application. The package comes with everything it needs to install it on a different machine, and even includes the necessary information for remote debugging.


Today was the most interesting day so far. Or rather: the day in which the sessions were varied, and all were interesting. The Active Directory stuff looks great, and in my opinion this should have gone into Win2008. My guess is that it was all related to lack of time, and they finally had the time to finalize and polish the things that were implemented in a rudimentary fashion in 2008.

Windows 8 looks great, and I think it will go places in the consumer market. Now if only Microsoft won’t drop the ball, and make Windows 8 devices available in the European consumer market. Previous devices like the Windows Phone and the Zune weren’t exactly a success here. I think the Zune wasn’t even released here.

The debugging talk was interesting, and I learned a couple of new tricks. Brian is always a good speaker to listen to.

And I am stoked about Metro development with C++. It looks really user-friendly, and you don’t need to jump through hoops like with previous designer experiences. It makes me wonder about the future of WPF, by the way. There is a large installed base for Windows Forms, and those certainly have their place.

My impression is that they came up with WPF to implement some of the ideas that made it into Metro, but without any real underlying strategy, platform support, or concept of platform diversity at the time it was implemented.

TMG Firewall Service stops with EventId 14057

Last night I observed the same behavior on several TMG 2010 installations, and received confirmation from other sources as well:

Around 03:17, the Firewall Service on one of my TMGs stopped, restarted several times (the restart option in service control), and finally stopped for good. Here are excerpts from the event log:

Log Name: Application

Source: Microsoft Forefront TMG Firewall

Date: 28.06.2012 03:17:27

Event ID: 14057

Task Category: None

Level: Error

Keywords: Classic

User: N/A

Computer: Belinda.DOMAINNAME.TLD


The Firewall service stopped because an application filter module C:\Program Files\Microsoft Forefront Threat Management Gateway\w3filter.dll generated an exception code C0000005 in address 000000007008254F when function CompleteAsyncIO was called. To resolve this error, remove recently installed application filters and restart the service.


Log Name: Application

Source: Application Error

Date: 28.06.2012 03:17:28

Event ID: 1000

Task Category: (100)

Level: Error

Keywords: Classic

User: N/A

Computer: Belinda.DOMAINNAME.TLD


Faulting application name: wspsrv.exe, version: 7.0.9193.500, time stamp: 0x4e75ffd3
Faulting module name: w3filter.dll, version: 7.0.9193.500, time stamp: 0x4e7600fb
Exception code: 0xc0000005
Fault offset: 0x000000000005254f
Faulting process id: 0xba8
Faulting application start time: 0x01cd2fb41f697ab4
Faulting application path: C:\Program Files\Microsoft Forefront Threat Management Gateway\wspsrv.exe
Faulting module path: C:\Program Files\Microsoft Forefront Threat Management Gateway\w3filter.dll
Report Id: 06780d81-c0bf-11e1-841a-f4ce46b67fce


Log Name: Application

Source: Windows Error Reporting

Date: 28.06.2012 03:17:30

Event ID: 1001

Task Category: None

Level: Information

Keywords: Classic

User: N/A

Computer: Belinda.DOMAINNAME.TLD


Fault bucket , type 0

Event Name: APPCRASH

Response: Not available

Cab Id: 0

Problem signature:

P1: wspsrv.exe

P2: 7.0.9193.500

P3: 4e75ffd3

P4: w3filter.dll

P5: 7.0.9193.500

P6: 4e7600fb

P7: c0000005

P8: 000000000005254f



Attached files:






These files may be available here:


Analysis symbol:

Rechecking for solution: 0

Report Id: 06780d81-c0bf-11e1-841a-f4ce46b67fce

Report Status: 4


Log Name: Application

Source: Microsoft Forefront TMG Firewall

Date: 28.06.2012 03:19:00

Event ID: 14003

Task Category: None

Level: Information

Keywords: Classic

User: N/A

Computer: Belinda.DOMAINNAME.TLD


Firewall service started.


Log Name: Application

Source: Microsoft Forefront TMG Firewall

Date: 28.06.2012 03:26:45

Event ID: 14057

Task Category: None

Level: Error

Keywords: Classic

User: N/A

Computer: Belinda.DOMAINNAME.TLD


The Firewall service stopped because an application filter module C:\Program Files\Microsoft Forefront Threat Management Gateway\w3filter.dll generated an exception code C0000005 in address 000000007092254F when function CompleteAsyncIO was called. To resolve this error, remove recently installed application filters and restart the service.


The problem occurred only on TMG 2010 with version number 7.0.9193.500, which corresponds to Service Pack 2 without the two update rollups. The problem is described in KB 2658903 (FIX: The Forefront Threat Management Gateway Firewall service (Wspsrv.exe) may crash frequently for a published website secured by SSL after you install Service Pack 2) and is fixed by Update Rollup 1.

Why the failure happened last night of all nights is still a mystery to me.


Best regards,

Dieter Rauscher
MVP Forefront

Spyware Blaster Database Update June 27 2012

• 17 new IE CLSIDs
• No new IE Restricted Zone entries
• No new entries for Mozilla
• 15350 total items in the database

Tech-ed Amsterdam 2012: Day 3

Last night I spent some time registering for Azure. It took just 5 minutes to sign up, and then sign up for the Virtual Machine functionality. All you need is a Live ID and a credit card. I'm still running on the 3-month trial for the moment.

Via the Azure management console you can configure the services that you use, and it is indeed as easy as it was said to be. One thing I found interesting is that a bigger machine is not more expensive than a small machine. It is just that IO for the small machine is cheaper. Interesting concept. Geographic distribution seems to be humongously expensive though. However when you think about it: if you want geographical redundancy for your own data center, it is going to cost you some real money as well.

The machine is running Windows 2012 RC, and I installed Visual Studio 2012 RC on it. Azure even was kind enough to give me the option to save the RDP file on my desktop. The one issue I can see is that I had to supply a DNS name for inside the cloud, and the one I picked originally was taken. Now, for me it is not a big deal to pick something else, but one has to wonder how many names will be picked by domain squatters.

The wireless is working flawlessly today. ‘Today’ has not yet officially started of course. It is still 45 minutes before the start of the first session, and the place is still deserted. The tables in one of the seating halls are still mostly empty, there is silence, and there is good coffee without lines at the machine. I bet I could get more work done here just sitting at a table, than sitting behind my desk at work. Silence is really undervalued.


This is a first for me: having two keynotes in a row. I wonder what the point is. I did notice there is no closing keynote on the last day, probably because people tend not to show up. I guess moving the keynote is one solution, but not having two would be another.

Perhaps they’ve decided to do it at this moment because the delegate party was yesterday, and they anticipate that there will be many delegates with a hangover, or who might have trouble getting out of bed at a reasonable hour.

As I am waiting for the keynote to start, I am enjoying the music. They're playing pimped-up versions of songs like 'Somebody That I Used to Know', with a lot of bass beats added. Not that I am a big fan of techno music, but I have to hand it to the organizers: they do know how to provide good sound. The quality is crisp, and the bass is strong. As a friend of mine used to say: you shouldn't so much hear the music as feel the music. If your chest cavity is not resonating with the drum beats, the bass isn't loud enough. I bet a Rammstein or Manowar song would sound great in this hall. There is no acoustic echo at all despite the high volume. Hint for the organizers: hire Rammstein as the warm-up act for the keynote.

Totally off topic: the coolest t-shirt I have seen so far is a simple black one with the text ‘I can see dead servers’. Thumbs up to whoever thought of that.

Apparently, the reason for the keynote split is that they wanted to have one for server 2012, and one for Windows 8. Initially I was not too thrilled about Metro, but in an environment that also contains tablets and phones, it does provide a compelling platform.

What does look impressive is that you get a Windows 7 VM with Windows 8 for running things in Windows 7 mode. This is similar to the Windows XP mode that was available with Windows 7. What is new is that with a simple click, you can mount the virtual disk to explore it, or you can even add it to the boot menu and boot directly from the disk image, thus giving you complete control of the hardware and all features (like 3D acceleration).

Another enhancement is that the touch gestures are supported via RDP. It’s not just the mouse input but the actual gesture messages.

What I thought was interesting as well is the Windows on a stick deployment. You can install windows on a memory stick and configure it fully, and then insert the stick in any computer and boot from the stick. You get full access to all hardware, except for the local disks which are hidden. This way you get all access to your own hardware (using the machine as nothing but a bunch of dead electronics) as well as isolation from any malware or other stuff which might be on that pc and which you really don’t want mixed with your (corporate) data and systems.

There was a mention that Metro apps use a new platform called WinRT, which is separate from Win32. I guess that makes a lot of sense. Win32 is a dated subsystem, geared towards a paradigm which is essentially synchronous in nature. Win32 is full of blocking calls and synchronized concepts. You don't want this behavior for tablets and phones. Blocking is bad. Whatever blocks should be handled asynchronously. To that effect, WinRT is ideal. It will probably cause more context switching and a bit more overhead, but feel more responsive. On the server side, Win32 is just fine.

There was a brief mention of some changes to the Visual C# language (like the 'await' keyword) which adds support for the fact that stuff happens asynchronously. That way, you don't have to chain asynchronous events (like e.g. reading a block of file data and then shoving it into a decoder) with your own glue logic. Those things look obvious and interesting, and it makes me wonder why there is no session about the language changes to Visual C#.

DBI308: Practical uses and Optimization of new T-SQL features in SQL Server 2012

This session is hosted by Tobias Ternstrom.

It is a good thing I turned up early, because with 20 minutes to go, this room is packed and people are being turned away. This is for two reasons: it is the only remotely interesting session in this slot, and it is hosted in a rather small room. There is room for 150 to 200-ish people if I had to take a guess.

Tobias is a very good speaker. Swedish guy, and with a good sense of humor. The session went by quickly, and was very interesting for someone like me who uses and encounters T-SQL regularly without necessarily being knowledgeable enough to keep up to date with all changes.

The first thing he covered was a new way of generating ID values (the SEQUENCE object). It basically provides the same functionality as an IDENTITY column, but with the advantage of being unique across tables. In that respect the values are like GUIDs, but you have a guarantee of getting sequential numbers.

Then there is the new ‘THROW’ keyword which can be used for exception handling in T-SQL within a try-catch construction. For now, the throw does not work across stored procedures, but that is on the todo list. I also learned a couple of things about RAISERROR that were new to me.

One very exciting feature I am looking forward to is the new support for window-based calculations and aggregates. This feature allows you to calculate aggregate values across only a window of rows around the current row, or for example compare values with the previous row, where 'previous' depends on how your window is defined. Doing this with the currently supported features involves either cursors or subqueries, which are error prone, ugly, and can make execution times explode.

The only problem with window-based ranges is that once you start using them, it will be very hard to go back to your existing SQL 2008 systems :)

And finally, there are a lot of new functions that are basically only syntactic sugar, but which make life so much easier. Often, these are keywords that mirror what other environments like Excel do, and which are just very convenient and save you a lot of manual typing.

And for those who don't know yet: Sweden once had a 30th of February :)

When you are working with times and dates, you can get very weird edge cases and this is one of them. This is not really relevant to the topic at hand, but it does serve as a nice example that you can mention whenever someone says that time and date are easy.

DEV350: Using Windows Runtime and SDK to build metro style apps

This session was hosted by John Lam.

The room was packed. The session started with an overview of Metro, and how it uses the new WinRT subsystem. There was an overview of the differences, like the fact that everything in WinRT is non-blocking and asynchronous.

That concluded the PowerPoint part of the session, and John proceeded with the demo. He built a Metro-style app in JavaScript, demonstrating the various things that were involved in making the application do something sensible.

I didn’t make any notes during this session, since it was all demo, and the entire demo pdf can be downloaded as well. It did again show the strength of Metro development.

It is worth mentioning that John is a very good speaker, and can take the audience along for a demo ride without rushing things or talking too quickly.

DEV368: Visual C++ and the Native Renaissance

This session is hosted by Steve Teixeira. My attendance here is a no-brainer. Steve is one of the best speakers at the whole of tech-ed. Even if C++ were not my favorite topic, I’d still be here.

I turned up an hour early for this session. Not because I expected a rush, but because I wanted to have a quiet place to type about the previous sessions. I didn’t get around to that because Steve was already there so we had the chance to talk.

One on-topic thing we talked about was standards conformance of the C++ compiler, which is a goal they are going for. When I asked about the 'export' keyword, Steve said it was removed from the standard. That kind of surprised me, because I remember some 'over my dead body' quote from one of the committee members. I have to say it was a good decision. I remember an article by EDG, who are the smartest compiler guys around, and they said that implementing it was very hard, and that it didn't really work the way you would expect it to.

Combined with the fact that there is no ABI for templates, it makes sense to finally get it out of the standard. If you can't make the compiler standards-compliant, then make the standard compiler-compliant :) Either way improves compliance.

Meanwhile, the crowd shuffles in steadily. Earlier on I joked to Steve that I had come early to beat the rush. By now the room is completely packed, and there were 10 to 15 people standing in the back of the room. A quick calculation says that there are about 130 to 140 people, which is amazing. I’d never have guessed that. I would have expected that it would be the same 30ish people I’ve seen in the other C++ sessions.

Speaking about people, the session had a graph early on about the age statistics for C++ programmers, to prove that we are not all greybeards. The average age turned out to be about my age demographic: late 20s, early 30s. As could be expected, the age curve dropped off with increasing age. There was one guy in the survey who was allegedly 92 years old. He was probably still waiting for a build to finish.

Steve talked about the renewed interest in C++, and some of the reasons for it. For one, with the variation in computing equipment (like ARM devices), cross-platform portability is starting to matter again. And with C++ for the processing and, for example, XAML support for the GUI, C++ is a compelling option. Especially considering that memory footprint and CPU cycles have become important again with that variety of devices.

The fact that the language standard itself was finalized also has a great deal to do with it. Things like the auto keyword and shared_ptr made a huge impact. There was an example with some old-style C++ code on the left and the equivalent (using modern syntax) on the right. The right part was as readable as e.g. C#. Yet 10 years ago, we'd have looked at the example on the left and said 'Yep, that's some good clean C++ there'. It is kind of obvious why many people didn't want to touch C++ with a ten foot pole.

With the new Metro-style programming (which will be shown more in-depth tomorrow), a key point is that C++ apps can finally use the same designer and the same design patterns as C#, VB.NET or JavaScript programmers. This means you have up-to-date tools, AND you have the option to make your C++ app look cool.

With the introduction done, Steve covered things that Kate already touched upon earlier, like PPL and AMP. The demo was awesome. It was basically the visualization of a piece of paper, on which you could draw using touch. And then (again, using touch) you could take the page and flip it, with a live 3D rendering of the page as it would look if it were a real piece of paper. There were some additional ripple and wave effects that could be turned on and off. That was really impressive from a performance point of view.

Another important part of the demos covered the way you would design a Metro app in C++. I am not going to go in detail about that here, since this topic will be handled more in-depth tomorrow.

At the end of the demo there were some intelligent questions, which is not a surprise, given an obviously intelligent audience. When the session slot had ended and there was some time for one-on-one questions, Steve was immediately swamped. Clearly, interest in C++ is on the uptake.

DEV311: Taking Control of Visual Studio through extensions and Extensibility

This session was supposed to be hosted by Anthony Cangialosi and Anthony Lindeman. There was only one presenter, and I didn't find out there were two names on the planning until after the session, so sadly I can't say which of the two presented it.

I chose this topic because at this session slot, there was little else that was relevant to me, and VS extensibility is something I played with before. Incidentally, my experiences with VS extensibility were mentioned right at the beginning of the demo as ’the bad old days’.

It used to be that just for being allowed to use the extensibility SDK you had to register and be approved, and sign paperwork and jump through various hoops. Documentation and examples were not too great either in those days.

These days, you can extend without prerequisites and all you need is the Visual Studio SDK. From there, you are all set and you can start building both your VS extension and the deployment package needed to distribute it in a proper manner.

Here I didn’t take many notes either, because like the Metro session, it was largely demo driven. In this particular demo, the presenter started with sample boilerplate, and fleshed it out as he went along. The demo was ok. It looked interesting, but for now not something I can really use.


Today was interesting. I learned a lot about Metro, and seeing it in the whole picture instead of just as a new type of desktop, it is starting to make sense.

The highlight of the day was of course without a doubt Steve’s talk. The session itself was of course interesting, but what made it special was that the room was packed. And again I think it is weird that there are more talks about C++ than all other languages combined. You won’t hear me complaining though.

Internet was up during the entire day with only a single glitch (as far as I noticed) so that is a definite improvement.

I also got my free book on Windows Server 2012, which is nice. Speaking about books: to my amazement I discovered that this year, there is no bookstore in the exposition hall, or indeed anywhere on site. Whoever made that decision should have his head checked, because tech books sell like hotcakes at events like these. Geeks love tech books at least as much as free t-shirts.

I've bought books every year I've been to Tech-Ed. The guy at the MS Press booth had gotten many complaints already, and while he was explaining the situation to me, several other people dropped by and said 'What?! No books!?'

By the end of the day I was dead tired and glad to get back to the hotel. I think I’ll turn in early tonight, after spending some more time playing with VS 2012 on my Azure machine.

Ivy Bridge: Intel’s New Processor Features (3rd Generation i3, i5, and i7)

Intel has finally released the new 3rd generation series of Intel Core processors. This 3rd generation of the Intel Core processors brings many new features and improvements over the earlier 2nd generation of processors. In this article, I am going to point out some of these exciting features that every computer enthusiast ought to know. […]


Tech-Ed Amsterdam 2012: Day 2

Breakfast was the same as yesterday. I thought of going someplace else, but didn't, for three reasons. First, Tech-Ed starts at 08:30, and the restaurants are at the other side of the RAI. I would really have to hurry in order to get there, have a meal, and get to the RAI. Second, the food at the restaurants is so great that my evening meal makes up for it. And lastly… a sizable portion of the world population is dying of famine and dehydration. During the time it took you to read this paragraph, several people just gave up and died. So it would be a bit snobbish to make a big deal out of it and scorn the perfectly good food I am given.

Now, Tech-Ed. There are noticeably more people here. Yesterday, I saw the exhibition hall while the builders were in the process of building the booths. I have to say that it will be impressive if everything is finished this morning, because yesterday it was still an unholy mess, like you'd expect on a big construction site that is nowhere near its due date.

The other big hall, where people gather in between sessions, is completely devoid of seating arrangements. I'll have to check out the exhibition hall later to see what it looks like. I am used to seeing these kinds of areas full of bean bags and other things that allow you to sit.

Of course, yesterday evening I had to phone the home front and talk with my oldest daughter, who is apparently missing me very much. And she started asking about her present, because of course I can't go 'on work holiday' without bringing back a present. I also had to explain who this 'Kate person' was that I had spent the day hanging out with. This was a point of interest for my wife as well :)

My youngest is much more emotionally independent. She only has to know that I come back and that I’ll have a present, and she’ll be satisfied.

The keynote

It was a usual Microsoft keynote, which started with a lot of action-movie music and light effects. Lots of deep bass. The keynote was delivered by Brad Anderson, with intermediate speakers like Mark Russinovich. The room was almost completely full.

As an aside, I have to mention that this is the first Tech-Ed where the wifi experience is plain crappy. The network stays up, but the internet connection keeps crapping out. The occasional connection comes through, and then it stays up for a while and goes down again. At first I thought it was my laptop, but then I noticed the guy next to me having the same problems. I have a feeling that whoever was in charge failed to anticipate the load that 8000 nerds would place on the internet connection.

Mobile 3G and stuff like that seems to work though, given the number of mobile users who could connect to the demo application.

The keynote started with a quick overview of Microsoft Hyper-V and all the goodness you now get out of the box with Windows Server 2012. The numbers they showed certainly looked impressive: per VM, 64 cores, 1TB (or was it 4?) of memory, and over a million IOPS of data transfer. It is very uncommon for vice presidents to mention the competition during a keynote, yet the name 'VMware' was used quite a lot. I really had the feeling that Microsoft was throwing down a gauntlet.

In all fairness, the numbers shown were certainly impressive, and seem to give VMware a run for its money. And the important consideration is that you do get a lot out of the box with Windows Server 2012, whereas with ESX you don't. I am not administering a VM host environment, so I could be wrong, but it does look more flexible and powerful.

After that there was a demo of some of the Azure features coupled with Visual Studio 2012. As far as keynotes go, this one wasn't too bad. The Azure demo made me change the selection of the next session I am going to. Originally I was going to see something about SQL in hybrid IT, but I decided to go to 'Windows Azure Today and Tomorrow' instead.

I should point out that there was not much to choose from for the first session slots. I have the impression that they kept Tuesday morning free from real content so that late arrivals would not miss anything important.

FND05: Windows Azure Today and Tomorrow

This talk was hosted by Scott Guthrie. A very knowledgeable person for sure, but not a natural speaker like Mark Russinovich.

Scott explained a bit about Azure, and how the payment plan works. In a first for Microsoft (in my experience) you only pay for what you actually use. You can dynamically increase the cpu, memory, storage and other things when you need them, and you only pay for the time you are using them, after which you can just reduce your hardware / services.

A basic virtual machine with Windows Server 2012 costs almost nothing. I would mention the cost here if the wireless actually worked and I could check. I am not entirely clear right now whether the metric looks at hours in use, or hours running with a given configuration, and whether it counts the hours if the machine is shut down.

The latter seems weird of course, but consider my scenario: I want a machine that is performant enough for running Visual Studio and debugging my various hobby projects, and I want to be able to use that machine from anywhere. Currently, that is a Windows 7 machine in my basement, with 4GB of RAM and a Core 2 Duo. It is getting dated but still fast enough.

However, I might need to replace that machine in the near future, and for the cost of buying a new machine, it might be worthwhile to run my dev machine in Azure cloud. Especially if it would not count the hours during which I am not using it, which would make it dirt cheap. And I would have the advantage of being able to work from my laptop in the living room, or a hotel room without needing to worry about my data or the performance metrics of whatever machine I am using.

There was some more talk about Azure and the various user scenarios. Mark Russinovich went in-depth in the next session, so I'll not cover them here.

AZR208: Windows Azure Virtual Machines and Virtual Networks

This session was hosted by Mark Russinovich. I’ve heard him speak before and he is a good speaker. Mark started with an explanation of Azure, and private clouds. One thing that was made clear is that cloud machines are just VHD files, just like normal Hyper-V machines. This means that transferring machines to and from the cloud, or different cloud providers, is completely transparent. There is no lock-in.

One way to use the cloud is to create a VM in the cloud, move your application there, and then scale up the VM as needed. On top of that, you could choose to run components of the application on cloud services. One such service is SQL Server, which can be scaled up to Godzilla-like proportions. The good thing (other than not having to maintain the monster hardware) is that Microsoft takes care of patching and other things.

In your virtual machines, you can also add storage on an as-needed basis, which will be backed by cloud storage solutions. The virtual disks will be stored on redundant disks in the SAN, meaning that you are isolated from normal disk problems that might occur. Your own disks can of course be configured for max performance, like striping.

Mark then covered things like how services are organized in different groups, so that software patching and network maintenance can be done without resulting downtime for your applications.

And finally there was an explanation of virtual networks. It is a given that the different servers in your collective can talk to each other, but in an enterprise environment, you may want to domain-join those machines. And of course, you would not want that to happen over an open internet connection. To that purpose, Azure supports a VPN connection to your own infrastructure. This is hardware VPN, and a nice feature is that Azure can generate VPN configuration scripts for the most common firewall manufacturers, like Cisco and Juniper. Once that is set up, those machines appear to be on your own local network.

I thought that was a particularly cool feature, because it allows a company to move a great number of (non-critical) machines up to the cloud, where their cost can be budgeted up front, and no on-site personnel is needed to support the infrastructure. Currently, if you are running your own VM infrastructure, you are supporting it all, and you may have a lot of infrastructure, which may cost a lot more money than needed. Then there is the square meter price of your data center, electrical power and cooling, … even if you move only the non-critical machines to Azure, a lot of benefit can be had.

A final thing worth mentioning is that Azure currently runs across 16 massive data centers which can take over from each other. So if one data center goes offline due to a meteor strike (to name a cool example), another can seamlessly take over. In fact, Mark mentioned that it is a hard promise that any data store change is replicated to at least one data center in the same geopolitical area within 15 minutes. This means that data from EU companies stays in the EU, and US data stays in the US, etc. For some people this is irrelevant, but many companies that are subject to regulatory bodies have strict requirements to make sure that certain data may never leave the EU or the US.

Before Tech-Ed, I had a fairly jaded view of Azure, but after the things I have seen in this and the previous sessions, I have come around. Azure (or clouds in general) is the way of the future. We are still in a transitional phase where companies start their own private clouds (be they Hyper-V or ESX or something else), but within the next 10 years I suspect that many companies will move a great deal of servers into a cloud that they don't manage themselves. After all, why would they?

And given that there is no cost of entry, that you only pay for what you use and that you can scale up and down dynamically, there is no doubt in my mind that this will take off.

WSV205 Windows Server 2012 Overview

This talk was hosted by Michael Leworthy.

As soon as I sat down and he started talking and showed an overview of his topics, I realized I had made a big mistake. This was going to be a marketing talk, ‘look how great we are’-style. I gave it five more minutes, which proved me right, and I decided to leave. There are few enough Visual Studio talks as it is, so I changed to DEV213.

DEV213: What’s new in Visual Studio 2012

This session was hosted by Orville Mcdonald. It was already 10 minutes underway when I came in, but I managed to pick up easily enough.

Right when I came in, he was demoing how easy it is to develop Metro apps in Visual Studio 2012. The main reason for Metro is to have a unified approach to developing for multiple platforms, so that your app might be usable on your desktop, on your tablet, and on your phone.

I cannot judge how easy it is to develop Metro-style apps, but testing them sure looked great. There is a simulator that can be used to test your app in real scenarios. The simulator can do all the things any real tablet can do: the orientation can be changed, you can ‘slide’ your finger, and perform all manual manipulations in a simulated way.

Then there was a demo of migrating a web application from a local SQL express database to one hosted on Azure. I can’t comment much on this, except that it is what I would expect from a database migration. The fact that it is in the cloud is less interesting, and I talked about that already.

One of the annoying new things in VS2012, which I cannot believe made it into the release candidate, is the all-caps menus. It is the one and only application I know with a menu in all caps, and I hope they reassess that decision. It is loud. Don’t believe me? Consider HOW RELAXING IT IS TO SPEND ALL DAY LOOKING AT A MENU IN ALL CAPS! LOUD, ISN’T IT?!?!


I was told that this will become optional in the RTM.

To be honest, I had hoped that this session would be more about language and debugging topics, but I guess Metro and Azure are the new kids on the block. In any case it was interesting to see how it all works and how it can be debugged.

DEV316: Application lifecycle management tools for C++ in Visual Studio 2012

This session was hosted by Rong Lu. I managed to talk to her in private for a couple of minutes, because I wanted to know what exactly was covered. As soon as I told her that I had been to Kate’s pre-conference talk, she told me that if I had any other place to go, it might be worthwhile doing that, because she’d cover the same topics, just a bit more in depth.

I thought to go to WCL332 instead, but that turned out to be about deployment tools and deployment diagnostics. Not really my cup of tea so I decided to go to DEV316 after all.

The first thing I noticed was that the crowd was the same as during Kate’s pre-conference talks. No surprise really. The 30 of us are probably the only C++ programmers in the whole of tech-ed.

Rong covered architectural discovery. The main user scenario for this feature is analyzing code that someone else wrote. C++ codebases tend to live a long time, and many C++ programmers have to maintain or update code they didn’t write themselves. In short, the architectural discovery tool builds a diagram of the binary components. These components can then be broken down into explorable and expandable layers. It is possible to edit and save the diagram and mark it up.

This sure is a handy tool for analyzing other people’s code, as well as for creating images for software design documents. It was undecided at this point, but this functionality will probably be reserved for the Ultimate edition of VS in the next release.

The next demo covered static code analysis. It is really user-friendly: you can easily figure out the problem, and there is even a right-click menu item for inserting the suppression pragma if you want to.

There are a couple hundred rules that are checked by the code analysis, and rule sets are programmer-configurable. There is 64-bit support, and all rules are available from the Professional version and above, including the concurrency analysis rules.

The unit testing framework for unmanaged C++ was shown again. This is available in all versions, though it will be really basic below VS Professional. From Premium onward, continuous run after build is available, which runs the unit tests with every build. The unit testing framework is extensible with third-party frameworks.

Code coverage results were available with a single click, with covered code shown as blue lines and uncovered code as red lines. It looked very well made, and it will certainly be very useful for ensuring the quality of algorithms.

This session topic was very interesting, and Rong Lu held an excellent presentation. Kate Gregory was there as well.


Day 2 was filled with lots of good information. Azure was the main surprise for me. And Rong Lu’s presentation was worthwhile as well.

One interesting factoid: at this edition of tech-ed there are more C++ language talks than C#, VB and F# combined. :)

Oh, and the internet connection stayed down until the end of the day. I was told at the wireless booth that they were trying to fix it. Wireless was up, but internet for the entire RAI had gone down. Perhaps they’ll figure it out by tomorrow.

Windows 7: Error 0x8024200D or 0x800f081f – Installing Service Pack 1

Patrice2012: Good evening everyone,

When you install Windows 7 Service Pack 1 (SP1), you may receive the following error message: Error 0x8024200D.

This problem can occur if Windows 7 Service Pack 1 (SP1) was downloaded by Windows Update (but not yet installed), or if the installation was corrupted. To obtain this service pack, follow this link.

To fix the problem automatically with the help of the "Fix-It" support tool, follow this link.
If you prefer to fix the problem yourself, follow the steps below:

Uninstall Windows 7 Service Pack 1
To resolve this problem, uninstall Service Pack 1 using the DISM command. To do so, follow the steps below for your version of Windows 7.
For the 32-bit version of Windows 7
1. Click Start, then type cmd in the search box.
2. Right-click cmd.exe, then click Run as administrator.
3. Type the following command and press ENTER:
DISM.exe /online /Remove-Package /packagename:Package_for_KB976932~31bf3856ad364e35~x86~~
4. Once the removal is complete, type exit, then press ENTER.
5. Restart the computer.

For the 64-bit version of Windows 7
1. Click Start, then type cmd in the search box.
2. Right-click cmd.exe, then click Run as administrator.
3. Type the following command and press ENTER:
DISM.exe /online /Remove-Package /packagename:Package_for_KB976932~31bf3856ad364e35~amd64~~
4. Once the removal is complete, type exit, then press ENTER.
5. Restart the computer.

Did you install the beta version of Windows 7 SP1?
If you installed the beta of Windows 7 SP1, you must uninstall the beta and then install the final version of Service Pack 1. To uninstall the beta, follow the steps below for your version of Windows 7.

If you have uninstalled the beta of Windows 7 SP1 and you still receive the error, you may have leftovers of the beta on your PC. These steps will also clean up any remnants of the beta from your PC.

For the 32-bit version of Windows 7
1. Click Start, then type cmd in the search box.
2. Right-click cmd.exe, then click Run as administrator.
3. Type the following command and press ENTER:
DISM.exe /online /Remove-Package /packagename:Package_for_KB976932~31bf3856ad364e35~x86~~
4. Once the removal is complete, type exit, then press ENTER.
5. Restart the computer.

For the 64-bit version of Windows 7
1. Click Start, then type cmd in the search box.
2. Right-click cmd.exe, then click Run as administrator.
3. Type the following command and press ENTER:
DISM.exe /online /Remove-Package /packagename:Package_for_KB976932~31bf3856ad364e35~amd64~~
4. Once the removal is complete, type exit, then press ENTER.
5. Restart the computer.

When you install Windows 7 or Windows Server 2008 R2 SP1, you may also receive error 0x800f081f.

When you examine the CBS log (C:\Windows\Logs\CBS\cbs.log), you will find errors like the ones below:

2011-03-03 21:38:06, Error CBS Exec: Failed to pre-stage package: Package_for_KB976933~31bf3856ad364e35~amd64~cs-CZ~6.1.7601.17514, file: TsUsbGD.sys, source: \\?\C:\Windows\Servicing\Packages, sandbox: (null) [HRESULT = 0x800f081f – CBS_E_SOURCE_MISSING]
2011-03-03 21:38:06, Info CBS Failed to gather all required files. [HRESULT = 0x800F081F – CBS_E_SOURCE_MISSING]

To resolve the problem, you can use the following methods:
1. Run the System Update Readiness Tool (KB947821). It should fix the error in most cases.
2. If the System Update Readiness Tool does not resolve the problem, you may have a pre-release version of the RSAT tools installed on the system.

To resolve this, do the following:
a. Uninstall the pre-release version of the RSAT tools
b. Restart the system
c. Install the final version of the RSAT tools
d. Reinstall Windows 7 SP1

Happy troubleshooting and have a good evening.

New Best Practice for RPC Timeouts in Exchange

Exchange 2010 and 2007 use RPC (Remote Procedure Call) for all client and RPC proxy calls. For example, email clients (Outlook, Outlook Anywhere (OA), and ActiveSync) use RPC for MAPI connectivity.

The default keep alive time for RPC connections uses the IIS idle connection timeout, which is 15 minutes.  This usually doesn’t cause a problem on local LAN or WAN connections, but routers and switches that are used to connect Internet clients to internal Exchange servers often have more aggressive timeouts.  Typically these network devices have a 5 minute timeout which causes problems for external clients, particularly Outlook Anywhere, iPhone, and iPad clients.  Symptoms include messages stuck in the Outbox and poor email performance on the remote clients, and high CPU utilization on the Exchange Client Access Servers (CAS).

The new best practice is to adjust the RPC keep alive timeout value on the Client Access Server from 15 minutes to 2 minutes.  Since RPC is a function of Windows, not Exchange, this value is adjusted under the Windows NT registry key.  The value is located here:

HKLM\Software\Policies\Microsoft\Windows NT\RPC\MinimumConnectionTimeout

Normally the MinimumConnectionTimeout DWORD value does not exist, which means RPC uses the default value of 900 seconds (15 minutes).  To adjust it, create or modify the MinimumConnectionTimeout value and set it to decimal 120 (seconds, i.e. 2 minutes).  IIS must be restarted on the CAS for the change to take effect.

The following command will create the appropriate values:

reg add "HKLM\Software\Policies\Microsoft\Windows NT\RPC" /v MinimumConnectionTimeout /t REG_DWORD /d 120

The Outlook and ActiveSync clients honor this new timeout during the connection to the CAS, so both client and server now send a Keep-Alive packet after two minutes of inactivity, effectively maintaining both TCP connections needed.

A colleague of mine works for a large global company that was affected by this.  They have several thousand iPads connecting to nine load balanced CAS servers and all the CAS were peaking at 100% CPU utilization.  Once they implemented this change the average load on the CAS is now 20-30% and the iPad performance is much improved.

This is my new best practice and I make this change on every Exchange CAS deployment.  For more information about RPC over HTTP see Configuring Computers for RPC over HTTP on TechNet.

Microsoft Cloud Day slides

My slides and those of the other presenters at last week’s Microsoft Cloud Day have been published.


Tech-ed Amsterdam 2012: Day 1

As it turns out, my prediction about breakfast turned out to be true. In fact, I’d say it was not even worthy of the name ‘breakfast’. There were a handful of bread rolls, no bacon, no cheese, no meat, no honey... They did have some pre-packaged jam, chocolate paste, cream cheese, and hard-boiled eggs. Exactly what I would have expected from such a loungy, modern, artsy place. Hip people apparently don’t eat real food. I’ll have to see if there is a good breakfast place available around here.

At least the coffee was good.

I’ll have to find a pharmacist today to buy some earplugs. I never travel without. And I am certain I put a pair of earplugs in my travel bag. There must have been a freak quantum event, opening up a wormhole in my luggage, making the plugs disappear.

To be fair, the hotel rooms are very quiet and soundproof. But the shower head was dripping. It stopped after a while, but I figured I might put in my earplugs so that I wouldn’t wake up if it started again. Yet I didn’t have any. I figured that if MacGyver could escape from a tub of acid with only a chocolate bar, I could improvise earplugs with the items available in my room.

I am not entirely certain, but there is a good chance I am the first person to improvise earplugs using only gummi bears and the cellophane wrapper of a plastic cup. The result was surprisingly ergonomic and effective. :)

The RAI is only a good 5 minutes’ walk from the hotel, which is nice. The RAI itself is still quiet. The crew is still in the process of building up the event halls. There is something nice about walking around in such vast halls before things have started. You’re unnoticed and all by yourself while being around other people. The coffee is great.

I was here early, and managed to get hold of Kate Gregory while she was preparing for the session. We spent over half an hour just catching up on things and life. That’s one of the good things about tech ed. You get to meet the same people again and again, and keep in touch across jobs and years.


Kate started the day with an overview of the new C++ language features of Visual Studio 2010 and 2012. More specifically, auto, shared_ptr and unique_ptr. It’s been a while since I programmed native C++, but I have to say that with just these 3 things alone, C++ has become a lot more readable and robust. I’ll have to play around with C++ again to get a feel for it.

To be more precise, I am writing a parser / compiler / code analysis tool in my free time for the DeltaV phase code running our plant. I already had a basic version that I can use, but it is rather ugly, and does not really produce an executable parse tree. It uses regular expressions to parse the code, and does not support all language features.

For the sake of doing it, I have started a new project in my free time to build a real tokenizer / parser that can be used later on to run the code in simulation mode and to perform more detailed static analysis. Currently I am doing that in C# for convenience’s sake. With over 100 megabytes of DeltaV code to crunch through, it will be interesting to see if the same algorithm in native C++ (using smart pointers and other new C++ goodness) is going to be faster and smaller.


After the break, the topic changed to Application Lifecycle Management, which is interesting, but not really applicable to me because at the moment I am not working in a development team environment. ALM is basically what Team Foundation used to be. ALM is going to be in all versions of VS; the amount of functionality will depend on which version you have.

I’ve heard good things about TFS, and many horror stories about setting it up, which can easily take 2 to 3 weeks of billable consulting time. To mitigate that issue, Microsoft has now provided a TFS hosted on Azure. For now it is free, though it may not stay that way. For smaller companies, this may become a very good option, depending on the SLA of course. You don’t want to discover one day that your project history is gone for good. I suspect that it will start costing money if you want a set SLA. Even then it will be much more cost effective for small companies than hosting their own TFS and dealing with maintenance.

Code coverage and unit testing

The next part of the talk covered code analysis, unit testing, and code coverage for native projects. I have to say that it looks impressive. For code analysis, there is now support for native code, meaning you can get code and class diagrams for native code, where that used to be possible only for managed code. Unit testing and code coverage work pretty much as they do for managed code.

Instead of working with method attributes, there are macros that provide similar functionality, and you don’t even have to know how they work. The unit testing provides test results, code coverage, and various UI features that make it very convenient. I sure wish I had had that available on earlier projects.


Lunch was ok. It was a sandwich / salad lunch. The sandwiches were good and the company was great, since I had lunch with Kate. The only downside is that I probably needed half the calories of the lunch to get to and from the lunch hall.

Library vs language

After the break, the topic switched to lambda functions and how they can be used with for_each. Following from that, she covered parallel_for_each, allowing the programmer to distribute a for_each loop across multiple CPU cores. That looks very interesting, promising a massive speed boost for repetitive tasks that are not interdependent.

My current code parser (the one I wrote for analyzing our process control code) already uses parallelization, but does so manually via a thread pool and explicit handling of task completions. This looks interesting.

With the additional performance gain of using native memory management, it will be interesting to see if a native implementation of my DeltaV code parser will be quicker. That would also be a good opportunity to get re-acquainted with C++ and the new language features, as well as keep my development portfolio up to date. If I ever want to get back into development again, I’ll need to be able to have something to show for the last couple of years.

The final leg of the parallelization examples was done with AMP, which stands for ‘Accelerated Massive Parallelism’ and uses the GPU instead of the CPU. The video card needs to be DirectX compatible. If it is, then you can create small tasks that are distributed in parallel by the GPU. As you all (should) know, a modern video card contains dozens or hundreds of pipelines which are perfect for executing simple drudge work.

Those pipelines suck at branching. They can’t handle it properly, and even if they can, performance goes from warp to suck. If you need to branch or do anything complex, you have to stick with the cpu cores. But anything that can be broken down to ‘do this simple task a gazillion times’ will make your GPU scream without even breaking a sweat.

At the end of the talk, there was an overview of the different types of containers and their pros and cons, and then the last part of the talk was about algorithms and some general programming remarks.


I had a great day talking with, and listening to Kate. Somehow the day just flew by. I really like C++, make no mistake, but I had my reservations about 8 hours of C++. Yet I shouldn’t have worried because the talk was very interesting, and Kate covered a lot of diverse topics.

I forgot to mention it yesterday, but I had great Japanese food at restaurant Takara. Takara is Japanese for ‘treasure’. Very good food at a very reasonable price. Tonight I had a steak at restaurant ‘Toon grill’ (Toon rhymes with Tone). It was Argentinian beef, and one of the best steaks I’ve ever eaten.

All in all, day 1 was very much worth it.

Excel: Replace Asterisk and Tilde

Example 1: How to find and to replace numbers

To replace wildcard characters (*) in a numeric value in a worksheet cell, follow these steps:

  1. Type 494** in cell A1.
  2. Select cell A1.
  3. On the Edit menu, click Replace.
    Note In Microsoft Office Excel 2007, click Find & Select in the Editing group on the Home tab, and then click Replace.
  4. In the Find what box, type ~*. To do this, press TILDE, and then press ASTERISK.
  5. In the Replace with box, type 2.
  6. Click Replace.

Cell A1 now reads 49422.

  • When you click Replace All, Excel makes the change throughout the worksheet. When you click Replace, Excel changes only the currently active cell and leaves the Replace dialog box open.
  • When you type an asterisk without a tilde in the Find what box, Excel replaces all entries with a 2. Excel treats the asterisk as a wildcard character. Therefore, 494** becomes 2.

Example 2: How to find and to replace a tilde

To replace a tilde in an Excel worksheet cell, follow these steps:

  1. Type Micros~1.xls in cell A1.
  2. Select cell A1.
  3. On the Edit menu, click Replace.
    Note In Excel 2007, click Find & Select in the Editing group on the Home tab, and then click Replace.
  4. In the Find what box, type ~~.
  5. In the Replace with box, type oft.
  6. Click Replace All.

Cell A1 now reads Microsoft1.xls.
