Category Archives: Alun’s code

Apple’s “goto fail” SSL issue–how do you avoid it?

Context – Apple releases security fix; everyone sees what they fixed

 

Last week, Apple released a security update for iOS, indicating that the vulnerability being fixed is one that allows SSL / TLS connections to continue even though the server should not be authenticated. This is how they described it:

Impact: An attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS

Description: Secure Transport failed to validate the authenticity of the connection. This issue was addressed by restoring missing validation steps.

Secure Transport is their library for handling SSL / TLS, meaning that the bulk of applications written for these platforms would not adequately validate the authenticity of servers to which they are connected.

Ignore “An attacker with a privileged network position” – this is the very definition of a Man-in-the-Middle (MITM) attacker. We used to be more blasé about this when networking was done with wires, but now that much of our use is wireless (possibly ALL of it, in the case of iOS), a MITM attacker can easily insert themselves into that privileged position on the network.

The other reason to ignore that terminology is that SSL / TLS takes as its core assumption that it is protecting against exactly such a MITM. By using SSL / TLS in your service, you are noting that there is a significant risk that an attacker has assumed just such a privileged network position.

Also note that “failed to validate the authenticity of the connection” means “allowed the attacker to attack you through an encrypted channel which you believed to be secure”. If the attacker can force your authentication to incorrectly succeed, you believe you are talking to the right server, and you open an encrypted channel to the attacker. That attacker can then open an encrypted channel to the server to which you meant to connect, and echo your information straight on to the server, so you get the same behaviour you expect, but the attacker can see everything that goes on between you and your server, and modify whatever parts of that communication they choose.

So this lack of authentication is essentially a complete failure of your secure connection.

As always happens, within hours (minutes?) of a patch’s release it has been reverse engineered, and others are offering their descriptions of the changes made, and how they might have come about.

In this case, the reverse engineering was made easier by the availability of open source copies of the source code in use. Note that this is not an intimation that open source is, in this case, any less secure than closed source, because the patches can be reverse engineered quickly – but it does give us a better insight into exactly the code as it’s seen by Apple’s developers.

Here’s the code:

    if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;


Yes, that’s a second “goto fail”, and it executes unconditionally – so the last “if”, the final hash check, is never reached, and control always jumps to the ‘fail’ label. Because the call in the condition just before it succeeded, ‘err’ is 0 at that point, so the function returns success without ever completing the verification.



Initial reaction – lots of haha, and suggestions of finger pointing



So, of course, the Internet being what it is, the first reaction is to laugh at the clowns who made such a simple mistake, that looks so obvious.



T-shirts are printed with “goto fail; goto fail;” on them. Nearly 200 have been sold already (not for me – I don’t generally wear black t-shirts).



But really, these are smart guys – “be smarter” is not the answer



This is SSL code. You don’t get let loose on SSL code unless you’re pretty smart to begin with. You don’t get to work as a developer at Apple on SSL code unless you’re very smart.



Clearly “be smart” is already in evidence.



There is a possibility that this is too much in evidence – that the arrogance of those with experience and a track record may have led these guys to avoid some standard protective measures. The evidence certainly fits that view, but then many developers start with that perspective anyway, so in the spirit of working with the developers you have, rather than the ones you theorise might be possible, let’s see how to address this issue long term:



Here are my suggested answers – what are yours?



Enforce indentation in your IDE / check-in process



OK, so it’s considered macho to not rely on an IDE. I’ve never understood that. It’s rather like saying how much you prefer pounding nails in with your bare fists, because it demonstrates how much more of a man you are than the guy with a hammer. It doesn’t make sense when you compare how fast the job gets done, or the silly and obvious errors that turn up clearly when the IDE handles your indenting, colouring, and style for you.



Yes, colouring. I know, colour-blind people exist – and those people should adjust the colours in the IDE so that they make sense to them. Even a colour-blind person can get shade information to help them. Syntax colouring often helps me spot when an XSS injection is just about ready to work, where I would otherwise have missed it in all the surrounding garbage of HTML code. The same is true when building code: you can spot when keywords are being interpreted as values, when string delimiters are accidentally unescaped, and so on.



The same is true for indentation. Indentation, when it’s caused by your IDE based on parsing your code, rather than by yourself pounding the space bar, is a valuable indication of program flow. If your indentation doesn’t match control flow, it’s because you aren’t enforcing indentation with an automated tool.
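
To see what that buys you, here’s the offending snippet again as any auto-indenter would lay it out (this is just my re-indentation of the published code, not Apple’s fix). The second goto fail drops back to the same level as the surrounding if statements, making the unconditional jump obvious at a glance:

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
    goto fail;    /* re-indented by the tool: clearly not governed by the "if" above */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;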



What the heck, enforce all kinds of style



Your IDE and your check-in process are a great place to enforce style standards to ensure that code is not confusing to the other developers on your team – or to yourself.



A little secret – one of the reasons I’m in this country in the first place is that I sent an eight-page fax to my bosses in the US, criticising their programming style and blaming (rightly) a number of bugs on the use of poor and inconsistent coding standards. This was true two decades ago using Fortran, and it’s true today in any number of different languages.



The style that was missed in this case – put braces around all your conditionally-executed statements.
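
As a sketch of what that style buys you (my own illustration, not Apple’s actual fix): with mandatory braces, even a duplicated goto fail stays inside the conditional block, where it is redundant but harmless.

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    {
        goto fail;
        goto fail;   /* duplicated, but still only reached when the check fails */
    }
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    {
        goto fail;
    }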



I have other style recommendations that have worked for me in the past – meaningful variable names, enforced indenting, maximum level of indenting, comment guidelines, constant-on-the-left of comparisons, don’t include comparisons and assignments in the same line, one line does one thing, etc, etc.



Make sure you back the style requirements with statements as to what you are trying to do with the style recommendation. “Make the code look the same across the team” is a good enough reason, but “prevent incorrect flow” is better.



Make sure your compiler warns on unreachable code



gcc has the option “-Wunreachable-code”.



gcc disabled the option in 2010.



gcc silently disabled the option, because they didn’t want anyone’s build to fail.



This is not (IMHO) a smart choice. If someone has a warning enabled, and has enabled the setting to produce a fatal error on warnings, they WANT their build to fail if that warning is triggered, and they WANT to know when that warning can no longer be relied upon.



So, without a warning on unreachable code, you’re basically screwed when it comes to control flow going where you don’t want it to.



Compile with warnings set to fatal errors



And of course there’s the trouble that’s caused when you have dozens and dozens of warnings, so warnings are ignored. Don’t get into this state – every warning is a place where the compiler is confused enough by your code that it doesn’t know whether you intended to do that bad thing.



Let me stress – if you have a warning, you have confused the compiler.



This is a bad thing.



You can individually silence warnings (with copious comments in your code, please!) if you are truly in need of a confusing operation, but for the most part, your code’s cleanliness and clarity benefit greatly if you address the warnings in a smart and simple fashion.
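
The Secure Transport snippet above actually shows the right way to handle one common warning. Assignment inside a condition normally draws a “did you mean ==?” warning from gcc and clang (-Wparentheses, part of -Wall); the extra parentheses and the explicit comparison are the conventional way to tell both the compiler and the reader that the assignment is deliberate. A minimal sketch, reusing the names from the snippet:

    /* With -Wall, gcc and clang warn here: "suggest parentheses around
       assignment used as truth value" - is this a mistyped "=="? */
    if (err = ReadyHash(&SSLHashSHA1, &hashCtx))
        goto fail;

    /* The extra parentheses and the explicit comparison document the intent,
       and quiet the warning without a pragma or a build-wide switch. */
    if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
        goto fail;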



Don’t over-optimise or over-clean your code



The compiler has an optimiser.



It’s really good at its job.



It’s better than you are at optimising code, unless you’re going to get more than a 10-20% improvement in speed.



Making code shorter in its source form does not make it run faster. It may make it harder to read. For instance, this is a perfectly workable form of strstr:



const char * strstr(const char *s1, const char *s2)
{
  return (!s1||!s2||!*s2)?s1:((!*s1)?0:((*s1==*s2&&s1==strstr(s1+1,s2+1)-1)?s1:strstr(s1+1,s2)));
}



Can you tell me if it has any bugs in it?



What’s its memory usage? Processor usage? How would you change it to make it work on case-insensitive comparisons? Does it overflow buffers?



Better still: does it compile to smaller or more performant code, if you rewrite it so that an entry-level developer can understand how it works?



Now go and read the implementation from your CRT. It’s much clearer, isn’t it?
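
For comparison, here’s the kind of rewrite I mean – a plain sketch of my own, named my_strstr so it doesn’t collide with the CRT’s version, and written so that an entry-level developer can follow it. The optimiser will do perfectly well with it:

    #include <stddef.h>   /* for NULL */

    const char * my_strstr(const char *s1, const char *s2)
    {
        if (s1 == NULL || s2 == NULL)
            return s1;                  /* mirror the one-liner: NULL in, same out */
        if (*s2 == '\0')
            return s1;                  /* an empty needle matches at the start */

        for (; *s1 != '\0'; ++s1)
        {
            const char *haystack = s1;
            const char *needle = s2;
            while (*haystack != '\0' && *needle != '\0' && *haystack == *needle)
            {
                ++haystack;
                ++needle;
            }
            if (*needle == '\0')
                return s1;              /* ran off the end of the needle: found a match */
        }
        return NULL;                    /* no match anywhere in s1 */
    }

For what it’s worth, this version also answers the earlier questions at a glance: no recursion, so no stack growth proportional to the input; no arithmetic on a possibly-NULL return value; and a case-insensitive variant is one tolower() away.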



Release / announce patches when your customers can patch



Releasing the patch on Friday for iOS and on Tuesday for OS X may actually have been the correct move – but it brings home the point that you should release patches at a time when your customers are best placed to apply them before your attackers can reverse engineer the fix and build attacks.



Make your security announcements findable



Where is the security announcement at Apple? I go to apple.com and search for “iOS 7.0.6 security update”, and I get nothing. It’d be really nice to find the bulletin right there. If it’s easier to find your documentation from outside your web site than from inside, you have a bad search engine.



Finally, a personal note



People who know me may have the impression that I hate Apple. It’s a little more nuanced than that.



I accept that other people love their Apple devices. In many ways, I can understand why.



I have previously owned Apple devices – and I have tried desperately to love them, and to find why other people are so devoted to them. I have failed. My attempts at devotion are unrequited, and the device stubbornly avoids helping me do anything useful.



Instead of a MacBook Pro, I now use a ThinkPad. Instead of an iPad (remember, I won one for free!), I now use a Surface 2.



I feel like Steve Jobs turned to me and quoted Dr Frank N Furter: “I didn’t make him for you.”



So, no, I don’t like Apple products FOR ME. I’m fine if other people want to use them.



This article is simply about a really quick and easy example of how simple faults cause major errors, and what you can do, even as an experienced developer, to prevent them from happening to you.

Deploying on the road…

Now that I have a Surface 2, I’m going to leave my laptop at home when I travel.

This leaves me with a concern – obviously, I’m going to play with some of my hobby software development while I have “down time”, but the devices for which I’m building are traveling with me, while the dev machine stays at home.

That’s OK where I’m building for the laptop, because it’s available by Remote Desktop through a Remote Desktop Gateway.

Deploying to my other devices – the Windows Phone and the Surface 2 running Windows RT – is something that I typically do by direct connection, or on the local network.

Windows Phone

For the Windows Phone, there’s a “Beta” Store, as opposed to the “Public” one, into which you can deploy your app and make it available to specific listed users – this allows you to quickly distribute an app remotely to your device.

Details on how to do this are here.

Windows Store

The story on Windows Store apps appears, at first blush, to be far more dismal, with numerous questions online asking “is there a beta store for Windows like there is for the phone?”

The answer comes back “no, but that’s a great idea for future development”.

But it is completely possible to distribute app packages to your Windows RT and other Windows 8.1 devices, using PowerShell.

The instructions at MSDN, here, will tell you quite clearly how you can do this.

Useful Excel Macros #1–compare two columns

I often need to compare two columns, and get a list in a third column of the items that are in one column, but not the other.

Every solution I find online has one common problem – the third column is full of blanks in between the items. I don’t want blanks. I want items.

So I wrote this function, which returns an array of the missing items – items which are in the first parameter, but not in the second.

I’m probably missing a trick or two (I’m particularly not happy with the extra element in the array that has to be deleted before the end), so please feel free to add to this in the comments.

Public Function Missing(ByRef l_ As Range, ByRef r_ As Range) As Variant()
' Returns a list of the items which are in l_ but not in r_
' Note that you need to put this formula into a range of cells as an array formula.
' So select a range, then type =Missing($A:$A,$B:$B), and press Ctrl-Shift-Enter
' If the range is too big, you'll get lots of N/A cells
Dim i As Long ' loop through l_
Dim l_value As Variant ' current value in l_
Dim y() As Variant ' Temp array to store values found
ReDim y(0)

For i = 1 To l_.Count ' Loop through input

  l_value = l_.Cells(i, 1) ' Get current value
  
  If Len(l_value) = 0 Then ' Exit when current value is empty
    GoTo exitloop
  End If

  If r_.Find(l_value) Is Nothing Then ' Can't find current value => add it to the missing
    ReDim Preserve y(UBound(y) + 1) ' Change array size
    y(UBound(y) - 1) = l_value ' Add current value to end
  End If
Next i
exitloop:
If UBound(y) < 1 Then ' Nothing was missing - leave the result empty
  Exit Function
End If
ReDim Preserve y(UBound(y) - 1)
If Application.Caller.Rows.Count > 1 Then ' If we were called from a vertical selection
  Missing = Application.Transpose(y) ' Transpose the array to a vertical mode.
Else
  Missing = y ' otherwise just return the array horizontally.
End If
End Function




Credential Provider update–Windows 8 SDK breaks a few things…

You’ll recall that back in February of 2011, I wrote an article on implementing your first Credential Provider for Windows 7 / 8 / Server 2008 R2 / Server 2012 – and it’s been a fairly successful post on my blog.

Just recently, I received a report from one of my users that my version of this was no longer wrapping the password provider on Windows Server 2008 R2.

As you’ll remember from that earlier article, it’s a little difficult (but far from impossible) to debug your virtual machine to get information out of the credential provider while it runs.

Just not getting called

Nothing seemed to be obviously wrong – the setup was still executing the same way, but the code just wasn’t getting called. For the longest time I couldn’t figure it out.

Finally, I took a look at the registry entries.

My code was installing itself to wrap the password provider with CLSID “{60b78e88-ead8-445c-9cfd-0b87f74ea6cd}”, but the password provider in Windows Server 2008 R2 appeared to have CLSID “{6f45dc1e-5384-457a-bc13-2cd81b0d28ed}”. Subtle, to be sure, but obviously different.

I couldn’t figure out immediately why this was happening, but I eventually traced back through the header files where CLSID_PasswordCredentialProvider was defined, and found the following:

    EXTERN_C const CLSID CLSID_PasswordCredentialProvider;

    #ifdef __cplusplus
    class DECLSPEC_UUID("60b78e88-ead8-445c-9cfd-0b87f74ea6cd")
    PasswordCredentialProvider;
    #endif

    EXTERN_C const CLSID CLSID_V1PasswordCredentialProvider;

    #ifdef __cplusplus
    class DECLSPEC_UUID("6f45dc1e-5384-457a-bc13-2cd81b0d28ed")
    V1PasswordCredentialProvider;
    #endif
As you can see, in addition to CLSID_PasswordCredentialProvider, there’s a new entry, CLSID_V1PasswordCredentialProvider, and it’s this that points to the class ID that Windows Server 2008 R2 uses for its password credential provider – and which I should have been wrapping with my code.

The explanation is obvious

It’s clear what happened here with a little research. For goodness-only-knows-what-unannounced-reason, Microsoft chose to change the class ID of the password credential provider in Windows 8 and Windows Server 2012. And, to make sure that old code would continue to work in Windows 8 with just a recompile, of course they made sure that the OLD name “CLSID_PasswordCredentialProvider” would point to the NEW class ID value. And, as a sop to those of us supporting old platforms, they gave us a NEW name “CLSID_V1PasswordCredentialProvider” to point to the OLD class ID value.

And then they told nobody, and included it in Visual Studio 2012 and the Windows 8 SDK.

In fact, if you go searching for CLSID_V1PasswordCredentialProvider, you’ll find there’s zero documentation on the web at all. That’s pretty much unacceptable behaviour – introducing a significant breaking change like this without documenting it.

So, how to support both values?

Supporting both values requires you to try and load each class in turn, and save details indicating which one you’ve loaded. I went for this rather simple code in SetUsageScenario:

    IUnknown *pUnknown = NULL;
    _pWrappedCLSID = CLSID_PasswordCredentialProvider;
    hr = ::CoCreateInstance(CLSID_PasswordCredentialProvider, NULL, CLSCTX_ALL, IID_PPV_ARGS(&pUnknown));
    if (hr == REGDB_E_CLASSNOTREG)
    {
        _pWrappedCLSID = CLSID_V1PasswordCredentialProvider;
        hr = ::CoCreateInstance(CLSID_V1PasswordCredentialProvider, NULL, CLSCTX_ALL, IID_PPV_ARGS(&pUnknown));
    }

Pretty bone-dead simple, I hope you’ll agree – the best code often is.

Of course, if you’re filtering on credential providers, and hope to hide the password provider, you’ll want to filter both providers there, too. Again, here’s my simple code for that in Filter:

    if (IsEqualGUID(rgclsidProviders[i], CLSID_PasswordCredentialProvider))
        rgbAllow[i] = FALSE;
    if (IsEqualGUID(rgclsidProviders[i], CLSID_V1PasswordCredentialProvider))
        rgbAllow[i] = FALSE;



If that wasn’t nasty enough…



Ironically, the Windows 8 SDK and Visual Studio 2012 also did something that hit the Windows XP version of the same package (which uses a WinLogon Notification Provider instead of a Credential Provider): they disabled the execution of my code on Windows XP.



This time, they did actually say something about it, though, which allowed me to trace and fix the problem just a little bit more quickly.



The actual blog post (not official documentation, just a blog post) that describes this change is here:



Windows XP Targeting with C++ in Visual Studio 2012



What this blog indicates is that a deliberate step was taken to disable Windows XP support in executables generated by Visual Studio 2012. You have to go back and make changes to your projects in order to continue supporting Windows XP.



That’s not perhaps so bad, because really, Windows XP is pretty darn old. In fact, a year from now it’ll be leaving extended support entirely, and heading into paid custom support, where you have to pay several thousand dollars for every patch you want to download. I’d upgrade to Windows 7 now, if I were you.

Removing capabilities from my first Windows Phone app

So, I thought I’d write a Windows Phone app using Visual Studio 2012 the other day. Just a simple little thing, to help me solve my son’s algebra homework without getting into the same binds he does (failure to copy correctly, fumbled arithmetic, you know the thing…)


And I run into my first problem.


The app uses no phone capabilities worth advertising – you know, things like the choice to track your location, so that the app’s install will ask the user “do you want to allow this app to have access to your location”, and you say either “allow”, or “why the hell does a flashlight application need to know where I am?”


And yet, when I run the “Automated Tests” under the Store Test Kit, I get the following:


[Screenshot: Store Test Kit “Automated Tests” capability validation results]


If you can’t read the image, or you’re searching for this in Google, I’ll tell you that it wants me to know that it’s validated all the capabilities I’m using, and has noticed that I’m using ID_CAP_MEDIALIB and ID_CAP_NETWORKING.


Weird, because I don’t do any networking, and I don’t access any of the phone user’s media.


It’s just my son and me using the app right now, but I can picture some paranoid person wondering why I need access to their media library or networking simply so I can solve the occasional simultaneous or quadratic equation!


Quite frankly, I was mystified, too. Did a bit of searching across the interWebs, but all the articles I found said the same thing – the MediaLib capability must be because you’re using something with the word “Radio” or “Media” in it somewhere (I’m not), and the Networking capability because you’re doing something across the network. I removed all the “using System.Net” lines from all of my code files, but still no joy.


[A quick tip: to find all these rules yourself, look in C:\Program Files (x86)\Microsoft SDKs\Windows Phone\v7.1\Tools\Marketplace for the file “rules.xml”, which will tell you what the capability detection code is looking for]


Nothing in my own code seemed to be actually causing this, so I took a step back and had a look at other references being included by the compiler by default.


System.Net seemed to be an obvious offender, so I removed that, to no effect (quite right, too, because it isn’t the offender, and doesn’t, on its own, cause ID_CAP_NETWORKING to be detected).


No, here’s the culprit:


[Screenshot: project References list, including Microsoft.Expression.Interactions]


Microsoft.Expression.Interactions – what on earth is that doing there?


It’s not something I remember including, and quite honestly, when I went looking for it, I’m disappointed to find that it’s associated with Expression Blend, not something I’ve actually used EVER. [Maybe I should, but that’s a topic for another time].


After removing this reference and rebuilding, the XAP passes the tests with no capabilities detected. Which is nice.


So, now I have my “Big Algebra” app in beta test, and it doesn’t tell the user that it’s going to need their media library or their network connection – because it’s not going to need them!

XSS Hipster loved Scriptless XSS before it was cool

I was surprised last night and throughout today, to see that a topic of major excitement at the Microsoft BlueHat Security Conference was that of “Scriptless XSS”.

The paper presented on the topic certainly repeats the word “novel” a few times, but I will note that if you do a Google or Bing search for “Scriptless XSS”, the first result in each case is, of course, a simple blog post from yours truly, a little over two years ago, in July 2010.

As the article notes, this isn’t even the first time I’d used the idea that XSS (Cross Site Scripting) is a complete misnomer, and that “HTML injection” is a more appropriate description. JavaScript – the “Scripting” in most people’s explanations of Cross Site Scripting – is definitely not required, and is only used because it is an alarmingly clear demonstration that something inappropriate is happening.

Every interview in the security field – every one!

Every time I have had an interview in the security field – that’s since 2006 – I’ve been asked “Explain what Cross Site Scripting is”, and rather hesitantly at first, but with growing surety, I have answered that it is simply “HTML injection”, and the conversation goes wonderfully from there.

Why did I come up with this idea?

Fairly simply, I’ve found that if you throw standard XSS exploits at developers and tell them to fix the flaw, they do silly things like blocking the word “script”. As I’ve pointed out before, Cross Site Scripting (as with all injection attacks) requires an Injection (how the attacker provides their data), an Escape (how the attacker’s data moves from data into code), an Attack or Execution (the payload), and optionally a Cleanup (returning the user’s browser state to normal so they don’t notice the attack happening).

It’s not the script, stupid, it’s the escape.

Attacks are wide and varied – the paper on Scriptless Attacks makes that clear, by presenting a number of novel (to me, at least) attacks using CSS (Cascading Style Sheet) syntax to exfiltrate data by measuring scrollbars. My example attack used nothing so outlandish – just the closure of one form, and the opening of another, with enough CSS attribute monkeying to make it look like the same form. The exfiltration of data in this case is by means of the rather pedestrian method of having the user type their password into a form field and submit it to a malicious site. No messing around with CSS to measure scrollbars and determine the size of fonts.

Hats off to these guys, though.

I will say this – the attacks they present are an interesting and entertaining demonstration that if you’re trying to block the Attack or Cleanup phases of an Injection Attack, you have already failed, you just don’t know it yet. Clearly a lot of work and new study went into these attacks, but it’s rather odd that their demonstrations are about the more complicated end of Scriptless XSS, rather than about the idea that defenders still aren’t aware of how best to defend.

Also, no doubt, they had the sense to submit a paper on this – all I did was blog about it, and provide a pedestrian example with no flashiness to it at all.

Hipster gets no respect.

So, yeah, I was talking about XSS without the S, long before it was cool to do so. As my son informs me, that makes me the XSS Hipster. It’d be gratifying to my ego to get a little nod for that (heck, I don’t even get an invite to BlueHat), but quite frankly rather than feeling all pissed off about that, I’m actually rather pleased that people are working to get the message out that JavaScript isn’t the problem, at least when it comes to XSS.

The problem is the Injection and the Escape – you can block the Injection by either not accepting data, or by having a tight whitelist of good values; and you can block the Escape by appropriately encoding all characters not definitively known to be safe.
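
By way of illustration of the second half of that advice – a minimal sketch of my own in C++, using a hypothetical htmlEncode helper, and only covering output into ordinary HTML element content (attribute, URL and JavaScript contexts each need their own encoder):

    #include <string>
    #include <cstdio>

    // Whitelist: leave alphanumerics and a few clearly safe characters alone,
    // and encode absolutely everything else as an HTML numeric entity.
    std::string htmlEncode(const std::string &input)
    {
        std::string output;
        for (unsigned char c : input)
        {
            if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
                (c >= '0' && c <= '9') || c == ' ' || c == '.' || c == ',')
            {
                output += static_cast<char>(c);
            }
            else
            {
                char buffer[16];
                std::snprintf(buffer, sizeof(buffer), "&#%u;", static_cast<unsigned>(c));
                output += buffer;
            }
        }
        return output;
    }

Encoding everything outside the whitelist is deliberately over-broad: it is much easier to show that nothing unexpected ever reaches the page as markup than to argue about which characters are dangerous in which context.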

Multiple CA0053 errors with Visual Studio 11 Beta

I hate it when the Internet doesn’t know the answer – and doesn’t even have the question – to a problem I’m experiencing.

Because it was released during the MVP Summit, I was able to download the Visual Studio 11 Beta and run it on a VS2010 project.

There’s no “conversion wizard”, which bodes well, because it suggests that I will be able to use this project in either environment (Visual Studio 2010 or the new VS11 beta) without any problems. And certainly, the project I selected to try worked just fine in Visual Studio 11 and when I switched back to Visual Studio 2010.

Unfortunately, one of the things that I noticed when building my project is that the code analysis phase crapped out with fourteen instances of the CA0053 error:

[Screenshot: Error List showing fourteen CA0053 errors]

As you can see, this is all about being unable to load rule assemblies from the previous version of Visual Studio – and is more than likely related to me installing the x64 version of Visual Studio 11 Beta, which therefore can’t load the 32-bit (x86) DLLs from Visual Studio 2010.

Curiously this problem only exists on one of the projects in my multi-project solution, and of course I couldn’t find anywhere in the user interface to reset this path.

I thought for a moment I had hit on something when I checked the project’s options, and found the Code Analysis tab, but it didn’t seem to matter what I did to change the rule set, there was no place to select the path to that rule set.

Then I decided to go searching for the path in the source tree.

There it was, in the project’s “.csproj” file – two entries in the XML file, CodeAnalysisRuleSetDirectories and CodeAnalysisRuleDirectories. These consisted of the simple text:

<CodeAnalysisRuleSetDirectories>;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets</CodeAnalysisRuleSetDirectories>

<CodeAnalysisRuleDirectories>;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules</CodeAnalysisRuleDirectories>

As you can imagine, I wouldn’t normally suggest editing files by hand that the interface normally takes care of for you, but it’s clear that in this case, the interface wasn’t helping.

So, I just closed all currently open copies of Visual Studio (all versions), and edited the file in notepad. I kept the entries themselves, but deleted the paths:

<CodeAnalysisRuleSetDirectories></CodeAnalysisRuleSetDirectories>

<CodeAnalysisRuleDirectories></CodeAnalysisRuleDirectories>

Errors gone; problem solved.

You’re welcome, Internet.

MVP news

My MVP award expires on March 31

So, I’ve submitted my information for re-awarding as an MVP – we’ll see whether I’ve done enough this year to warrant being admitted again into the MVP ranks.

MVP Summit

Next week is the MVP Summit, where I visit Microsoft in Bellevue and Redmond for a week of brainwashing and meet-n-greet. I joke about this being a bit of a junket, but in reality, I get more information out of this than from most of the other conferences I’ve attended – perhaps mostly because the content is so tightly targeted.

That’s not always the case, of course – sometimes you’re scheduled to hear a talk that you’ve already heard three different times this year, but for those occasions, my advice would be to find another one that’s going on at the same time that you do want to hear. Talk to other MVPs not in your speciality, and find out what they’re attending. If you feel like you really want to get approval, ask your MVP lead if it’s OK to switch to the other session.

Very rarely a talk will be so strictly NDA-related that you will be blocked from entering, but not often.

Oh, and trade swag with other MVPs. Very frequently your fellow MVPs will be willing to trade swag that they got for their speciality for yours – or across regions. Make friends and talk to people – and don’t assume that the ‘industry luminaries’ aren’t willing to talk to you.

Featured TechNet Wiki article

Also this week, comes news that I’ve been recognised for authoring the TechNet Wiki article of the Week, for my post on Microsoft’s excellent Elevation of Privilege Threat Modeling card game. Since that post was made two years ago, I’ve used the deck in a number of environments and with a few different game styles, but the goal each time has remained the same, and been successfully met – to make developers think about the threats that their application designs are subject to, without having to have those developers be security experts or have any significant experience of security issues.

What else I did at Black Hat / DefCon–the Core DataMatrix Contest

Black Hat, and its associated sideshow, DefCon, consists of a number of different components. Training, Briefings, Exhibition and Contests, all make up part of Black Hat, and DefCon is a looser collection of Workshops, Events, Parties, Talks, Villages, Contests and numerous other things besides(*).

Perhaps the thing that gave me the most fun this year was the contest that I entered at Black Hat and at DefCon. The contest was run by Core Labs, a part of Core Security Technologies, and featured the theme of reverse engineering.

Reverse Engineering is the skill of looking at someone else’s code – in source code or binary form – and figuring out what the code does, and more importantly, how best to make it do what you want. This often involves exceeding the original design specifications – which is perhaps the simplest and most inclusive definition of “hacking”.

In the DataMatrix contest, the code (or at least, a portion of it) is given to you in source form, in C#. You are told that this code is running as part of a server, and you are given access to the server in the form of two webcams and an output screen. The output screen displays a score sheet, the views from each webcam, and a ‘debug’ output window. I’ve lost the link to the Black Hat version of the code, but here’s the DefCon code.

The webcams are the only form of input to the server that are available to the contestants. Each contestant is given a DataMatrix containing their activation code. This is a bitmap (kind of like a two-dimensional barcode) with some “registration” values around the edge, and squares either black or white in the middle.

And that’s it – that’s all the help you get.

But then, that’s probably all the help you’ll need.

The first challenges

The first challenges are relatively easy. First, you activate your userid by showing the webcam your initial card, and then you see there’s a function called “process_activate” – that sounds like it’s the function that was used to activate your card.

It’s fairly simple to see that this must use the single byte command (in the “cmd” variable) “1”, along with your two byte userid and four byte password, to register you in the system as an active user. It also increases a user-specific value, “score”. To make this easy to understand, we’ll call this “scoring a point”.

Then you see a function “process_free” – from the code, this is clearly a free point. All you need is a command “10”, and your userid, to score a point.

Another function, “process_pieceofcake”, is almost as easy. Command 11, and your userid, plus another four bytes which are simply the two’s-complement of your userid. Easy. In fact, in the Black Hat version, this was even easier, if I remember correctly, but I don’t have the code handy.

“process_name” is clearly one to call early on, because it gets you the bragging rights of putting your own name in the high score table. Plus, it gives you five points more. Pretty good, huh? By now you should have eight points.

Some more interesting challenges

“process_regalo” took my interest next, since it talks about a “gift_list”. Regalo is, apparently, Spanish for “gift”. This one’s strange, because the process has some activity even when the command code isn’t the code expected.

So, I took a look at what that path does. Checks four bytes for the user’s password, and if the “data_regalos” value for this user is less than 10, increments it, and then assigns an extra point to a randomly selected member of the gift list.

Having figured that out, I realised that the quicker I get on the gift list, the quicker I start racking up the points. So, I solved the little coding conundrum (did you figure that one out yourself?) in the other path of process_regalo, and added myself to the gift list.

Five times.

Yeah, five times – did you spot that in the code?

“process_fabe” and “process_fabe13” – those were a little harder. You have to not only crack an MD5 hash (not difficult, but hard), but in the “fabe13” case, figure out what the appropriate “encode” is for the “decode” function. [ROT13, if you didn’t get it]

“process_enqueue” – nasty, this one sends a message to an email address at mailinator.com that you have to figure out for yourself. I still haven’t figured it out. So, I also haven’t got the points from “process_claimMessage”.

“process_sync” was one function where I knew I had an advantage. It requires the use of a .NET Random function, and because I spend a fair amount of my development time in .NET, I knew that I could use my own system to figure out what times the sync function was expecting me at. Occasionally, the webcams weren’t reading my cards quickly enough, but that’s OK. I didn’t necessarily need a whole lot of those points.

Ladies and Gentlemen, we have a winner!

So, as you’ve probably guessed by now, using these functions I managed to rack up quite a number of points, and as it happened, I conquered the Black Hat competition. 60 points to me, 27 to my nearest opponent.

As a result of this, I am now the proud owner of an iPad. Yes, I know, all those things I’ve always said about Apple, and here I am, walking away from a competition with an iPad 2. The irony is almost unbearable. I’ll tell you later what I think of the iPad.

Then comes DefCon

DefCon started out much the same – I was streaking ahead of the competition, largely because the contest was better attended, and I’d already got my foot into the gift_list early on.

Then I saw the part of the server code that was new – it allowed you to write a limited form of program to execute on the server, that would randomly add points to your score. I entered that, and sure enough, I got a pile of points very quickly – about twice as many as I had at Black Hat.

I thought that meant I was going to win the prize.

Sadly, I hadn’t taken into consideration that this was DefCon. The people there are sometimes more devious (though there are also an awful lot of wannabes).

Sure enough, two of my competitors executed the portion of code that allowed them to dump out the list of executing code, as well as to remove the code sample I had submitted. That way, they could copy my code in order to give themselves points, and remove my ability to add points.

In a way, I almost felt like this was kind of cheating – what, they couldn’t write their own code? But, realistically, this was simply a part of the challenge – if I had been as good at reverse engineering as I felt I was, and a little less cocky, I would have spotted this functionality and taken advantage of the means with which to prevent it.

As it was, I came in third, and won a t-shirt. But the joy of winning the Black Hat contest is still something I’m proud of, and grateful to Core for letting me play their games.

If .NET is so good, why can’t I…?

OK, so don’t get me wrong – there are lots of things I like about .NET:

  • I like that there are five main programming styles to choose from (C#, Visual Basic, F#, PowerShell and C++)
  • I like that it’s so quick and easy to write nice-looking programs
  • I like the Code Access Security model (although most places aren’t disciplined enough to use it)
  • I like the lack of remote code execution through buffer overflow

And up until recently, when all I was really doing was reviewing other people’s .NET code, my complaints were relatively few:

  • Everything’s a bloody exception – even normal occurrences.
    • Trying to open a file and finding it not there is hardly an exceptional circumstance, for instance.
    • Similarly, trying to convert a text string to a number – that frequently fails, particularly in my line of work.
    • Exception-oriented programming in general gets taken to extremes, and this can lead to poor performance and/or unstable code, as programmers are inclined either to catch everything, or accept code that throws an exception at the slightest nod.
  • It’s pretty much only available on the one platform (yes, Mono, but really – are you going to pitch your project as running under Mono on Linux?)
  • You can’t search for .NET specific answers in a generic way.
    • “Java” occurs sufficiently infrequently in any programming context other than the Java platform / language, so you can search for “string endswith java” and be pretty sure that you’re looking at the Java string.endsWith member, rather than any other.
    • I’ve taken to searching for “C#” along with my search queries, because the hash is less likely to be discarded by search engines than the dot in “.NET”, but it’s still not all that handy.

But now that I’ve started trying to write .NET code of my own, I’m noticing that there are some really large, and really irritating, gaps.

  • Shell properties in Windows – file titles, description, comments, thumbnails, etc.
    • While there are a couple of additional helpers, such as the Windows API Code Pack, they still cause major headaches, and require the inclusion of another framework that is not maintained by usual update procedures.
    • Even with the Windows API Code Pack, there’s no access to the System.Thumbnail or System.ThumbnailStream properties (and presumably others)
  • Handling audio files – I was hoping to do some work on a sound file analyser, to determine if two allegedly-similar half-hour recordings are truly of the same event, and maybe to figure out which one is likely to be the better. I didn’t find any good libraries for FFT. Maybe this was because you just can’t search for .NET-specific FFT libraries.
  • Marshalling pointers in structs.
    • So many structures consist of lists of pointers to memory contents that it would be really nice to create this sort of structure in a simple byte array, with offsets instead of pointers, and have the marshalling functions convert the offsets into pointers (and back again when necessary) – there’s a sketch of what I mean just after this list.
    • Better still, of course, would be to have these structures use offsets or counts instead of pointers, but hey, you have to support the existing versions.
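
To illustrate the kind of structure I mean – a hand-rolled sketch of my own, not any particular Windows structure – the native layout records offsets into the same flat blob rather than raw pointers, and a single fix-up pass turns an offset into a usable pointer. It’s exactly that fix-up (and its reverse) that I’d like the marshalling layer to do for me:

    #include <cstdint>

    // A hypothetical self-contained blob: a fixed-size header followed by string
    // data, where the header records offsets (from the start of the blob) rather
    // than raw pointers, so the whole thing can be copied or sent as one flat
    // byte array.
    struct NameRecord
    {
        std::uint32_t nameOffset;        // offset of a NUL-terminated string
        std::uint32_t descriptionOffset; // offset of another string
    };

    // The fix-up step that converts an offset into a pointer once the blob has
    // landed wherever it is going to live.
    inline const char *resolve(const std::uint8_t *blob, std::uint32_t offset)
    {
        return reinterpret_cast<const char *>(blob + offset);
    }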

So now I have to choose between writing my applications in .NET and missing out on some of the power I may want later in the app’s development, or carrying on with native C++, taking the potential security hit, but knowing what my code is doing from one moment to the next.

What other irritating gaps have you seen – or have you found great ways to fill those gaps?