Amid almost no fanfare whatsoever, Microsoft yesterday released a tool I've been begging them for over the last five or six years.
[It's not unusual for me to be this persistently demanding; I've found it's often the only way to get what I want.]
As you've guessed from the title, this tool is the "SDL Threat Modeling Tool 2014". Sexy name, indeed.
Well, yeah, kind of. There's the TAM Threat Analysis & Modeling Tool, which is looking quite creaky with age now, and which I never found to be particularly usable (though some people have had success with it, so I'm not completely dismissive of it). Then there are the previous versions of the SDL Threat Modeling Tool.
These have had their uses - and certainly it's noticeable that when I work with a team of developers, one of whom has worked at Microsoft, it's encouraging to ask "show me your threat model" and have them turn around with something useful to dissect.
In a word: cost.
Threat modeling tools from vendors other than Microsoft are pretty pricey. If you're a government or military contractor, they're probably great and wonderful. Otherwise, you'll probably draw your DFDs (data flow diagrams) in PowerPoint (yes, that's one of the easier DFD tools available to most of you!), and write your threat models in Word.
Unless, of course, you download and use the Microsoft SDL Threat Modeling Tool, which has always been free.
The SDL TM tool itself was free, but it had a rather significant dependency.
Visio.
Visio is not cheap.
As a result, those of us who championed threat modeling at all in our enterprises found it remarkably difficult to get approval to use a free tool that depended on an expensive tool that nobody was going to use.
With the release of Microsoft SDL Threat Modeling Tool 2014, Microsoft has finally delivered a tool that allows for the creation of moderately complex DFDs (you don't want more complex DFDs than that, anyway!), and a threat library-based analysis of those DFDs, without making it depend on anything more expensive or niche than Windows and .NET. [So, essentially, just Windows.]
Yes, that means no Visio required.
A quick bullet list of some of the features you'll like, besides the lack of a Visio requirement:
Yes, every good blog post has to have one of these, doesn't it? What am I asking you to do with this information?
Download the tool. Try it out on a relatively simple project, and see how easy it is to generate a few threats.
Once you're familiar with the tool, visit the KnowledgeBase directory in the tool's installation folder, and read the XML files that were used to create your threats.
Add an object type.
Add a data flow type.
Add custom properties that describe your custom types.
Use those custom properties in a rule you create to generate one of the common threats in your environment.
Work with others in your security and development teams to generate a good threat library, and embody it in XML rules that you can distribute to other users of the threat modeling tool in your enterprise (see the hypothetical sketch after this list).
Document and mitigate threats. Measure how successful you are at predicting threats, at reducing risk, and at impacting security earlier in your development cycle.
Then do a better job on each project.
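To give a flavour of what such a custom rule can look like, here's a hypothetical sketch. I'm inventing the element names and the rule expression purely for illustration - this is not the tool's actual schema, so treat the XML files shipped in the KnowledgeBase directory as the authoritative reference:

<!-- Hypothetical sketch only: element names and the rule expression are
     illustrative, not the SDL Threat Modeling Tool's real schema. -->
<ThreatType Id="EnterpriseSqlInjection">
  <Title>SQL Injection Against an Internal Database</Title>
  <Category>Tampering</Category>
  <!-- Fire whenever a data flow crosses a trust boundary into a SQL store. -->
  <GenerationRule>flow crosses 'TrustBoundary' and target is 'SqlDatabase'</GenerationRule>
  <Description>Unvalidated input reaching the database may alter the intended query.</Description>
</ThreatType>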
I hate it when the Internet doesn't know the answer - and doesn't even have the question - to a problem I'm experiencing.
Because it was released during the MVP Summit, I was able to download the Visual Studio 11 Beta and run it on a VS2010 project.
There's no "conversion wizard", which bodes well, because it suggests that I will be able to use this project in either environment (Visual Studio 2010 or the new VS11 beta) without any problems. And certainly, the project I selected to try worked just fine in Visual Studio 11 and when I switched back to Visual Studio 2010.
Unfortunately, one of the things that I noticed when building my project is that the code analysis phase crapped out with fourteen instances of the CA0053 error:
As you can see, this is all about being unable to load rule assemblies from the previous version of Visual Studio - and is more than likely related to my installing the x64 version of Visual Studio 11 Beta, which therefore can't load the 32-bit (x86) DLLs from Visual Studio 2010.
Curiously, this problem only exists in one of the projects in my multi-project solution, and of course I couldn't find anywhere in the user interface to reset this path.
I thought for a moment I had hit on something when I checked the project's options and found the Code Analysis tab, but no matter what I did to change the rule set, there was no place to select the path to that rule set.
Then I decided to go searching for the path in the source tree.
There it was, in the project's ".csproj" file - two entries in the XML file, CodeAnalysisRuleSetDirectories and CodeAnalysisRuleDirectories. These consisted of the simple text:
<CodeAnalysisRuleSetDirectories>;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets</CodeAnalysisRuleSetDirectories>
<CodeAnalysisRuleDirectories>;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules</CodeAnalysisRuleDirectories>
As you can imagine, I wouldn't normally suggest hand-editing files that the interface normally takes care of for you, but it's clear that in this case, the interface wasn't helping.
So I closed all currently open copies of Visual Studio (all versions), and edited the file in Notepad. I kept the entries themselves, but deleted the paths:
<CodeAnalysisRuleSetDirectories></CodeAnalysisRuleSetDirectories>
<CodeAnalysisRuleDirectories></CodeAnalysisRuleDirectories>
Errors gone; problem solved.
You're welcome, Internet.
So, I’ve submitted my information for re-awarding as an MVP – we’ll see whether I’ve done enough this year to warrant being admitted again into the MVP ranks.
Next week is the MVP Summit, where I visit Microsoft in Bellevue and Redmond for a week of brainwashing and meet-n-greet. I joke about this being a bit of a junket, but in reality, I get more information out of this than from most of the other conferences I’ve attended – perhaps mostly because the content is so tightly targeted.
That’s not always the case, of course – sometimes you’re scheduled to hear a talk that you’ve already heard three different times this year, but for those occasions, my advice would be to find another one that’s going on at the same time that you do want to hear. Talk to other MVPs not in your speciality, and find out what they’re attending. If you feel like you really want to get approval, ask your MVP lead if it’s OK to switch to the other session.
Occasionally a talk will be so strictly NDA-related that you'll be blocked from entering, but that doesn't happen often.
Oh, and trade swag with other MVPs. Very frequently your fellow MVPs will be willing to trade swag that they got for their speciality for yours – or across regions. Make friends and talk to people – and don’t assume that the ‘industry luminaries’ aren’t willing to talk to you.
Also this week, comes news that I’ve been recognised for authoring the TechNet Wiki article of the Week, for my post on Microsoft’s excellent Elevation of Privilege Threat Modeling card game. Since that post was made two years ago, I’ve used the deck in a number of environments and with a few different game styles, but the goal each time has remained the same, and been successfully met – to make developers think about the threats that their application designs are subject to, without having to have those developers be security experts or have any significant experience of security issues.
I've written before on "Full Disclosure":
Recent events have me thinking once again about "full disclosure", its many meanings, and how it makes me feel when bugs are disclosed publicly without allowing for the vendor or developer to address the bug for themselves.
The post that reminded me to write on this topic was Tavis Ormandy's revelation of the Help Control Protocol vulnerability, but any of a number of similar disclosures could have triggered it.
If your motivation is to help secure users and their systems, then I think your disclosure pattern should roughly be:
[Obviously, some of the timing moves up if and when the exploit appears in the wild, but the order is essentially the same.]
Disadvantages:
It's fairly clear that there are some people in the security research industry whose main goal is that of self-publicity. These are the show-offs, whether they are publicising their company or their services or just themselves.
For these people the disclosure pattern would be:
Disadvantages:
When all you're in it for is the money, the answer is clear - you shop around, describing your bugs to Tipping Point and the like, then selling your bug to the highest bidder.
Disadvantages:
Sometimes this isn't so bad - you get the money, and many of the vulnerability buyers will work with vendors to address the bug - all the while, protecting their subset of users with their security tool.
What a noble goal - you're trying to make it clear to users that they have chosen the wrong vendor.
Here, the disclosure pattern is simple:
Disadvantages:
You may agree or disagree with a lot of what I've written above - but if you're going to publish vulnerability research, you have to deal with the prospect that people will be watching what you post, when you post it, and how you post it - and they will infer from that (even if you think you haven't implied anything of the sort) a motive and a personality. What are your posts and your published research going to say about your motives? Is that what you want them to say? Are you going to have to spend your time explaining that this is not really what you intended?
As Tavis is discovering, you can also find it difficult to separate your private vulnerability research from your employer - the line is perhaps harder to draw in Tavis's case, since he is apparently employed in the capacity of vulnerability researcher. If your employer is understanding and you have an agreement as to what is personal work and what is work work, that's not a big problem - but it can be a significant headache if it has not been addressed ahead of time.
I've been trying to improve my IFetch application's overall performance, and it's clear that the best thing that could be done to improve it immediately is to cache the information being returned from the BBC Radio web site, so that next time around, the application doesn't have to reload all the information from the web, slow as it often is.
My first thought was quite simple - store the RDF (Resource Description Framework) XML files in a cache directory, purge them if they haven't been accessed in, say, a month, and only fetch them again from the web if the web page has been updated since the file was last modified.
Sadly, I wrote the code before I discovered that the BBC Radio web site acts as if all these RDF files were modified just this second.
Happily, I wrote the code before I went and looked at the documentation for the HttpWebRequest.IfModifiedSince property.
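For context, here's roughly the policy I had in mind - a minimal sketch, assuming the usual System, System.IO and System.Net namespaces are in scope; the cache path, file name and URL are placeholders, not IFetch's actual code:

// A minimal sketch of the caching policy described above. The cache
// directory, file name and URL are placeholders, and the one-month
// purge window comes from my description - adjust to taste.
Uri rdfUri = new Uri("http://www.bbc.co.uk/example/programme.rdf"); // placeholder
string cacheFile = Path.Combine(@"C:\IFetchCache", "programme.rdf"); // placeholder

if (File.Exists(cacheFile) &&
    File.GetLastAccessTime(cacheFile) < DateTime.Now.AddMonths(-1))
{
    // Purge cache entries that haven't been touched in a month.
    File.Delete(cacheFile);
}

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(rdfUri);
if (File.Exists(cacheFile))
{
    // Ask the server to send the page only if it's newer than our copy.
    request.IfModifiedSince = File.GetLastWriteTime(cacheFile);
}
// GetResponse() throws a WebException with a NotModified status when the
// cached copy is still current - see the corrected sample later in this post.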
Here's the documentation's sample code (with source colouring and line numbers):
01 // Create a new 'Uri' object with the mentioned string.
02 Uri myUri =new Uri("http://www.contoso.com");
03 // Create a new 'HttpWebRequest' object with the above 'Uri' object.
04 HttpWebRequest myHttpWebRequest= (HttpWebRequest)WebRequest.Create(myUri);
05 // Create a new 'DateTime' object.
06 DateTime today= DateTime.Now;
07 if (DateTime.Compare(today,myHttpWebRequest.IfModifiedSince)==0)
08 {
09 // Assign the response object of 'HttpWebRequest' to a 'HttpWebResponse' variable.
10 HttpWebResponse myHttpWebResponse=(HttpWebResponse)myHttpWebRequest.GetResponse();
11 Console.WriteLine("Response headers \n{0}\n",myHttpWebResponse.Headers);
12 Stream streamResponse=myHttpWebResponse.GetResponseStream();
13 StreamReader streamRead = new StreamReader( streamResponse );
14 Char[] readBuff = new Char[256];
15 int count = streamRead.Read( readBuff, 0, 256 );
16 Console.WriteLine("\nThe contents of Html Page are : \n");
17 while (count > 0)
18 {
19 String outputData = new String(readBuff, 0, count);
20 Console.Write(outputData);
21 count = streamRead.Read(readBuff, 0, 256);
22 }
23 // Close the Stream object.
24 streamResponse.Close();
25 streamRead.Close();
26 // Release the HttpWebResponse Resource.
27 myHttpWebResponse.Close();
28 Console.WriteLine("\nPress 'Enter' key to continue...");
29 Console.Read();
30 }
31 else
32 {
33 Console.WriteLine("\nThe page has been modified since "+today);
34 }
So, what's the problem?
First, there's the one that should be fairly obvious - by comparing (in line 7) for equality to "DateTime.Now" (assigned in line 6), the programmer has essentially said that this sample is designed to differentiate pages modified after the run from pages modified before the run. This will have one of two effects - on a site where the If-Modified-Since request header works properly, all results will demonstrate that the page has not been modified; on a site where If-Modified-Since always returns that the page has been modified, it will of course always state that the page has been modified. That alone makes this not a very useful sample, even if the rest of the code were correct.
But the greater error is that the IfModifiedSince value is a request header, and yet it is being compared against a target date, as if it already contained the value of the page's last modification. How would it get that value (at line 7), when the web site isn't actually contacted until the call to GetResponse() in line 10?
Also irritating - given that the .NET Framework already makes far too much use of exceptions - is that the NotModified response is signalled as an exception, instead of being a simple category of response.
How should this code be changed?
My suggestion is as follows - let me know if I've screwed anything else up:
01 // Create a new 'Uri' object with the mentioned string.
02 Uri myUri = new Uri("http://www.google.com/intl/en/privacy.html");
03 // Create a new 'HttpWebRequest' object with the above 'Uri' object.
04 HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create(myUri);
05 // Create a new 'DateTime' object.
06 DateTime today = DateTime.Now;
07 today=today.AddDays(-21.0); // Test for pages modified in the last three weeks.
08 myHttpWebRequest.IfModifiedSince = today;
09 try
10 {
11 // Assign the response object of 'HttpWebRequest' to a 'HttpWebResponse' variable.
12 HttpWebResponse myHttpWebResponse = (HttpWebResponse)myHttpWebRequest.GetResponse();
13 Console.WriteLine("Page modified recently\nResponse headers \n{0}\n", myHttpWebResponse.Headers);
14 Stream streamResponse = myHttpWebResponse.GetResponseStream();
15 StreamReader streamRead = new StreamReader(streamResponse);
16 Char[] readBuff = new Char[256];
17 int count = streamRead.Read(readBuff, 0, 256);
18 Console.WriteLine("\nThe contents of Html Page are : \n");
19 while (count > 0)
20 {
21 String outputData = new String(readBuff, 0, count);
22 Console.Write(outputData);
23 count = streamRead.Read(readBuff, 0, 256);
24 }
25 // Close the Stream object.
26 streamResponse.Close();
27 streamRead.Close();
28 Console.WriteLine("\nPress 'Enter' key to continue...");
29 Console.Read();
30 // Release the HttpWebResponse Resource.
31 myHttpWebResponse.Close();
32 }
33 catch (System.Net.WebException e)
34 {
35 if (e.Response != null)
36 {
37 if (((HttpWebResponse)e.Response).StatusCode == HttpStatusCode.NotModified)
38 Console.WriteLine("\nThe page has not been modified since " + today);
39 else
40 Console.WriteLine("\nUnexpected status code " + ((HttpWebResponse)e.Response).StatusCode);
41 }
42 else
43 {
44 Console.WriteLine("\nUnexpected Web Exception " + e.Message);
45 }
46 }
So, why are sucky samples relevant to a blog about security?
The original sample doesn't cause any obvious security problems, except for the glaring lack of exception handling.
But other samples I have seen do contain significant security flaws, or at least behaviours that tend to lead to security flaws - not checking the size of buffers, concatenating strings and passing them to SQL, etc. It's worth pointing out that while writing this blog post, I couldn't find any such samples at Microsoft's MSDN site, so they've obviously done at least a moderately good job of fixing up samples.
This sample simply doesn't work, though, and that implies that the tech writer responsible didn't actually test it in the two most obvious cases (page modified, page not modified).
Today, I’ve been reminding many people at work that I’ll be out next week for the MVP Summit.
In previous years, the questions I’ve received in response have been mainly about “what’s that?”, “does that mean you work for Microsoft?”, “what are you going to be learning about?” etc.
This year, the questions have moved on to “what kind of stuff do you get from that?”, “are they going to give you a Zune?”, “do you all get a new Windows Phone?” and so on.
While that would certainly be a really cool thing, I think it is worth pointing out that Microsoft’s MVP programme is suffering from the credit crunch just as much as anyone. When I first joined, I can remember the hotel room I stayed in for the MVP Summit was huge – there was a phone in the bathroom, which was necessary because you had to call for a taxi to get to the bed. Now, we’re expected to double up on room occupancy. Previous summits have been in Seattle at the conference centre, this summit is in Bellevue. As has been revealed in numerous places, there’s no concept of “MVP Bucks” that we get to spend each year at the company store any more. Many of the program group dinners are now held in Microsoft cafeterias, nice though they are, rather than in restaurants and bars around the Redmond area.
So, no, I don’t anticipate getting a Zune or a Tablet PC (but wouldn’t it be funny if Steve Jobs were to offer us all iPads?) – though we might hear something about the much rumoured Zune Phone, if it really exists at all, but then we probably would be told to keep it a secret.
What I do anticipate is getting a look into some of the attitudes that are being brought to the design of Windows 8, IE 9, IIS 8, ADFS, etc. With luck, I’ll learn something I can bring back not only to work, but also to readers of my blog, and to the newsgroups I still hang out in. [Occasionally I’ll hit the web forums, but they’re still too painfully slow and cumbersome to read and respond to on a regular basis.]
And that’s well worth the price of admission.
Did I say there's a price to being an MVP? Yes, there is, and it is that you help the community of Microsoft customers. Because it's a retrospective award, and the criterion is based on something like "conspicuously more than others in the field", it's not really something you can evaluate ahead of time - and true to that, most of the MVPs would "pay that price" even in the absence of an MVP programme. It's just that with membership of the programme, it's a little easier to give the right advice.
Hidden by the smoke and noise of thirteen (13! count them!) security bulletins, with updates for 26 vulnerabilities and a further 4 third-party ActiveX Killbits (software that other companies have asked Microsoft to kill because of security flaws), we find the following, a mere security advisory:
Microsoft Security Advisory (977377): Vulnerability in TLS/SSL Could Allow Spoofing
It's been a long time coming, this workaround - which disables TLS/SSL renegotiation in Windows, not just IIS.
Disabling renegotiation in IIS is pretty easy - you simply disable client certificates or mutual authentication on the web server. This patch gives you the ability to disable renegotiation system-wide, even in the case where the renegotiation you're disabling is on the client side. I can't imagine for the moment why you might need that, but when deploying fixes for symmetrical behaviour, it's best to control it using switches that work in either direction.
The long-term fix is yet to arrive - and that's the creation and implementation of a new renegotiation method that takes into account the traffic that has gone on before.
To my mind, even this is a bit of a concession to bad design of HTTPS, in that HTTPS causes a "TOC/TOU" (Time-of-check/Time-of-use) vulnerability, by not recognising that correct use of TLS/SSL requires authentication and then resource request, rather than the other way around. But that's a debate that has enough clever adherents on both sides to render any argument futile.
Suffice it to say that this can be fixed most easily by tightening up renegotiation at the TLS layer, and so that's where it will be fixed.
I'll fall back to my standard answer to all questions: it depends.
If your servers do not use client auth / mutual auth, you don't need this patch. Your server simply isn't going to accept a renegotiation request.
If your servers do use client authentication / mutual authentication, you can either apply this patch, or you can set the earlier available SSLAlwaysNegoClientCert setting to require client authentication to occur on initial connection to the web server.
One or other of these methods - the patch, or the SSLAlwaysNegoClientCert setting - will work for your application, unless your application strictly requires renegotiation in order to perform client auth. In that case, go change your application, and point its developers to documentation of the attack, so that they can see the extent of the problem.
Be sure to read the accompanying KB article to find out not only how to turn the renegotiation-disabling feature on or off, but also to see which apps are, or may be, adversely affected by this change - to date, DirectAccess, Exchange ActiveSync, IIS and IE.
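As an aside, for those wondering where SSLAlwaysNegoClientCert lives: on Windows Vista / Server 2008 and later, the corresponding behaviour is the clientcertnegotiation option on the HTTP.sys SSL binding. Here's a sketch of the shape of the commands - the port, certificate hash and application GUID are placeholders for your own values, and the delete-then-add dance reflects my understanding that the binding can't be edited in place:

rem Placeholder values throughout - substitute your own binding details.
netsh http delete sslcert ipport=0.0.0.0:443
netsh http add sslcert ipport=0.0.0.0:443 certhash=0123456789abcdef0123456789abcdef01234567 appid={00112233-4455-6677-8899-aabbccddeeff} clientcertnegotiation=enable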
On the speed front, I would have liked to see Microsoft make this change far more quickly. Disabling TLS/SSL renegotiation should not involve a huge amount of code, and while the change has repercussions and will impact some applications, as long as it did not cause instability, some institutions would have wanted to disable renegotiation lock, stock and barrel in a hurry, out of a heightened sense of fear.
I'm usually the first to defend Microsoft's perceived slowness to patch, on the basis that they do a really good job of testing the fixes, but for this one, I have to wonder if Microsoft wasn't a little over-cautious.
While I have no quibbles with the bulletin, there are a couple of statements in the MSRC blog entry that I would have to disagree with:
IIS 6, IIS 7, IIS 7.5 not affected in default configuration
Customers using Internet Information Services (IIS) 6, 7 or 7.5 are not affected in their default configuration. These versions of IIS do not support client-initiated renegotiation, and will also not perform a server-initiated renegotiation. If there is no renegotiation, the vulnerability does not exist. The only situation in which these versions of the IIS web server are affected is when the server is configured for certificate-based mutual authentication, which is not a common setting.
Well, of course - in the default setting on most Windows systems, IIS is not installed, so it's not vulnerable.
That's clearly not what they meant.
Did they mean "the default configuration with IIS installed and turned on, with a certificate installed"?
Clearly, but that's hardly "the default configuration". It may not even be the most commonly used configuration for IIS, as many sites escape without needing to use certificates.
Sadly, if I add "and mutual authentication enabled", we're only one checkbox away from the "default configuration" to which this article refers, and we're suddenly into vulnerable territory.
In other words, if you require client / mutual authentication, then the default configuration of IIS that will achieve that is vulnerable, and you have to make a deliberate change to a non-default configuration (the SSLAlwaysNegoClientCert setting) in order to remain non-vulnerable without the 977377 patch.
The other concern I have is over the language in the section "Likelihood of the vulnerability being exploited in general case", which discusses only the original CSRF-like behaviour exploited under the initial reports of this problem.
There are other ways to exploit this, some of which require a little asinine behaviour on the part of the administrator, and others of which are quite surprisingly efficient. I was particularly struck by the ability to redirect a client, and make it appear that the server is the one doing the redirection.
I think that Eric and Maarten understate the likelihood of exploit - and they do not sufficiently emphasise that the chief reason this won't be exploited is that it requires a MITM (man-in-the-middle) attack to have already successfully taken place without being noticed. That's not trivial or common - although there are numerous viruses and bots that achieve it in a number of ways.
It's a little unclear on first reading the advisory whether this affects just IIS or all TLS/SSL users on the affected system. I've asked if this can be addressed, and I'm hoping to see the advisory change in the coming days.
I've rambled on for long enough - the point here is that if you're worried about SSL/TLS client certificate renegotiation issues that I've reported about in posts 1, 2 and 3 of my series, by all means download and try this patch.
Be warned that it may kill behaviour your application relies upon - if that is the case, then sorry, you'll have to wait until TLS is fixed, and then drag your server and your clients up to date with that fix.
The release of this advisory is by no means the end of the story for this vulnerability - there will eventually be a supported and tested protocol fix, which will probably also be a mere advisory, followed by updates and eventually a gradual move to switch to the new TLS versions that will support this change.
This isn't a world-busting change, but it should demonstrate adequately that changes to encryption protocols are not something that can happen overnight - or even in a few short months.
The Rule: Performance optimizations are not worth making for anything less than 10% improvement in speed.
Corollary: Performance optimizations must be measured before and after, and changes reverted if they do not cause significant performance improvement.
Converse: If you are pushing back on implementing a feature "because it will make the app unbearably slow", particularly if that feature is deemed a security requirement, you had better be able to demonstrate that loss in performance, and it had better be significant.
I come to this insight through years of work as a developer, in which I've seen far too many mistakes introduced through people either being "clever" or taking shortcuts - and the chief reason given for both of these behaviours is that the developer was "trying to make the program faster" (or leaner, or smaller).
I have also seen developers disable security features, or insist that they shouldn't implement security measures (SSL is a common example), because they "will make the application slower".
I have yet to see a project complete a proper implementation of SSL for security that significantly slowed the application. In many cases, the performance testing that was done to ensure that SSL had no significant effect demonstrated that the bottleneck was already somewhere else.
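For what it's worth, holding yourself to the corollary doesn't require heavy tooling. Here's a minimal sketch of the kind of before-and-after measurement I mean - DoWork and the iteration count are placeholders for your own hot path:

using System;
using System.Diagnostics;

class PerfCheck
{
    static void DoWork()
    {
        // Placeholder: the code path you claim to have optimised.
    }

    static void Main()
    {
        const int iterations = 10000; // placeholder: enough runs to smooth out noise
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            DoWork();
        sw.Stop();
        // Record this figure before and after your change - and revert the
        // change if the improvement is under the ~10% threshold.
        Console.WriteLine("Average: {0:F4} ms per call",
            sw.Elapsed.TotalMilliseconds / iterations);
    }
}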
Last week, I went to Microsoft's TechFest as part of their "Public Day". This is the first time MVPs as a group have been invited to this event, and although it's clear we missed some of the demonstrations that are not public-ready, this is something that I hope can be extended to us in future, even if only to Washington-state MVPs.
For general news links on MS TechFest 2009, you can search news.google.com for "TechFest". Here are a couple of samples:
http://www.king5.com/video/index.html?nvid=335707 - I didn't see these guys there.
http://www.guardian.co.uk/technology/blog/2009/feb/25/microsoft-software – I bumped into this guy.
I also saw Chris Pirillo there, from LockerGnome and Chris.Pirillo, but he hasn't written anything yet. I only mention him because it's about time I thanked him for being one of the earliest online writers (they were called "e-Zines" back then, apparently) to mention WFTPD in his column. Sadly, I don't have a copy to remind me what he said :(
Apologies to anyone who expected to reach me by email that day - the usual computers spread around the Microsoft Conference Centre for email and web browsing were missing, possibly because the Press were there, and they'll steal anything that isn't nailed down, before coming back with crowbars.
So, here's some description of the things I saw, ranging from the exciting and relevant to the "why is Microsoft spending money on that?" [Note that this is not meant to be disrespectful of "pure research" - often, today's "useless meanderings" become tomorrow's product. WFTPD itself started from a momentary "how hard can it really be?" lapse in my own judgement, followed by a little research and a lot of effort.]
I snapped this picture last week at Microsoft Research's TechFest event.
Microsoft always makes the visiting MVPs feel welcome at Global Summit time, when all MVP awardees are invited to visit Microsoft's campus, and engage in face-to-face conversations with various Microsoft product groups about the feedback they're seeing from the users they talk to in their various forums, whether that's Usenet newsgroups, web forums, user groups, or book and magazine readers.
This year, in large part thanks to the efforts of one of the other Security MVPs, Dana Epp, we have a fantastic schedule of in-depth sessions on identity frameworks, threat modeling, Microsoft's internal security, and a number of other topics that I should perhaps keep quiet about.
The other benefit to me, as an MVP, from these sessions is that I get to network with other MVPs - all of whom are intelligent, driven individuals with expertise in a wide variety of fields, not just my own area of Enterprise Security.
Already I've spoken to a number of people in conversations that I intend to continue long after the Summit is over. I've made some new friends, met plenty of old friends, and expanded and strengthened existing social connections.
It's a little sad that the worsening economic climate has caused a number of MVPs from outside the US - and even some from inside the country - to miss this year's Summit. But it does appear that the MVP programme is still strong, as around 1500 MVPs from around the world are in attendance.
For those wondering about the swag bag, we got a cloth bag, stickers, a pen, and a water bottle. The shirts will be arriving on Wednesday (thank you, US Customs!). The benefit is more in the programme of technical sessions than in the bag - unlike some technical conferences, where your $2500 entrance fee gets you a rather spectacular bag of "freebies" and a number of sessions scheduled such that all the ones you want to see are in the same time slot.
I have to say, I love the stickers. Being a part of the MVP programme is a really nice thing that Microsoft does to say "thank you" to people who are assisting Microsoft's customers in newsgroups, user groups, etc, and who would continue to do so anyway, even if Microsoft ended the MVP programme. As such, I think it's an excellent recognition, and I'm proud of the fact that I was awarded - so I like to show it off, mainly by plastering stickers on my various technology items like laptops and PDAs.