Ten reasons Dr J wants to go to Amazon.
10. Tired of working at a desk, wants to work at a door on breeze blocks instead.
9. Commute to Downtown Seattle is better than commute to London, New Zealand, Japan, Barcelona, Amazon Basin, Surinam, Atlantis, etc.
8. Tired of dealing with all those damn MVPs and their incessant fawning.
7. Feels the need to do security, rather than just travel the world talking about it.
6. Techies don’t appreciate being labeled “marketing”.
4. After giving the same talk for the last three years, surely the industry has gotten the point by now, or is beyond help.
3. Wants to hack Amazon’s system to improve the sales of his books.
2. Microsoft is no place for a family man [but is Amazon really all that much better?]
1. Office closer to the water; water means scuba.
[Other friends of mine say “good luck” in their own ways:
Sandi “Spyware Sucks” Hardmeier
Susan “E-Bitz” Bradley
Joe “Joeware” Ware]
So, why do you think he left?
So, I’m beta testing Outlook 2007, and it’s got some really pretty “ribbons” that indicate that they’ve gone to great lengths to improve the user interface.
Today, I’m creating a distribution list from a number of people that have emailed me.
This should be easy.
Here we go…
Create a new distribution list, give it a name, and then drag the messages into it – it’s smart enough to know what I’m trying to do, right?
Wrong. No drop operation gets performed.
Okay, so maybe if I open the message up, I can drag the individual sender’s address into the distribution list, yes? No.
What about clicking “Add Member”, and then dragging the sender’s address into the Add Member dialog? That should work, right? No, again.
Worse still, the “Add Member” dialog is application modal – I cannot even change focus to the message to read the email address that I’m typing.
Right now, I’m resorting to dragging the address from the message into wordpad, then copying and pasting portions of the address into the appropriate fields in the “Add Member” dialog.
And I have to do this for a couple of dozen messages.
The ribbons are all very pretty, but really, Microsoft, “pretty” should always take second place to “usable”. [Yes, my own interfaces aren’t exactly pretty, but I like to think that they are usable – your constructive criticism is very welcome!]
Outlook has always given me the impression that the team that writes it – or at least, the team that designs it – has not spent a good deal of time actually using it.
I may have commented before that it’s common practice within Microsoft to apologise for missing meetings by saying that “Outlook ate the meeting invite”.
[Yes, commenting on beta software is probably technically in breach of some NDA, but the useless behaviour I’m talking about here was useless before, and it looks like being useless for some time to come.]
When was the last time you restored from your backups?
If you answer “never” (or even anything approaching “quite some time ago”), your backups might well be completely useless, for all you know.
Without testing the restore procedure once in a while, your backup process is a waste of time. Literally – you’re replacing one random risk (the chance that something will destroy your data) with another (the chance that something will destroy your data, and you’ve been backing up only the zeroes(*)) that’s only slightly better.
When you need to restore from a backup, that’s the worst time to find out that you can’t restore from a backup.
(*) Now, watch, as someone links to this article and tells the world that I’m advocating cutting your backup time and space in half by only backing up the ones. What I’m really advocating is that you backup your zeroes to one tape, and your ones to another.
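A restore test doesn’t have to be elaborate – even a scripted spot-check beats never trying at all. Here’s a minimal sketch of the idea (the paths and the tar-based “backup” are my own illustration, not a recommendation of any particular backup tool):

```shell
#!/bin/sh
# Minimal restore-test sketch: back up a directory, restore it
# somewhere else, and verify the restored copy matches the original.
set -e

SRC=$(mktemp -d)       # stand-in for the data you back up
RESTORE=$(mktemp -d)   # scratch area for the test restore

echo "important data" > "$SRC/file.txt"

# "Backup": create the archive.
tar -C "$SRC" -cf /tmp/backup.tar .

# "Restore": unpack into a separate directory...
tar -C "$RESTORE" -xf /tmp/backup.tar

# ...and actually compare, byte for byte. If this fails,
# your backup process was a waste of time.
diff -r "$SRC" "$RESTORE" && echo "restore OK"
```

The point is the `diff` at the end: a backup you’ve never compared against the original is a backup you’re only hoping works.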
This message (“Insufficient resources exist to complete the API”, along with an event log event ID 26 from “Application Popup”) has been popping up on my laptop from time to time, along with the rather troublesome issue that the machine refuses to hibernate. I had it set up so that I could close the lid, and the laptop would stand by for a few minutes, then hibernate.
Suddenly, it seems, the laptop is unable to hibernate, and only this confusing message appears on the screen.
Realisation dawned this morning that I had recently installed an extra 512MB of memory into my laptop.
Of course there are no system resources for hibernating, because my hiberfil.sys space is 512MB smaller, having been created with the previous size of memory.
The solution is simple – open the power properties (right-click the battery / power icon in the notification area, and select “Adjust Power Properties”), click the Hibernate tab, and then uncheck the “Enable Hibernation” box.
Click “Apply”, to delete hiberfil.sys, then re-check the “Enable Hibernation” box, and click “Apply” or “OK” to recreate it.
So, if your computer cannot hibernate, and you get this message, try disabling and re-enabling the Hibernation feature – it’s not as random a fix as other articles make it sound. If you’ve changed your memory size, you need to change your hibernation file’s size.
Just in case this doesn’t fix you right up, there is another issue that it might be – a hotfix is available for Windows XP SP2, XP Tablet Edition 2005, or XP Media Center Edition 2005; there’s also an earlier hotfix for Windows XP, for a different issue on multi-proc machines.
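If you’d rather skip the GUI, the same delete-and-recreate cycle can be done from a command prompt with powercfg (run as an administrator; turning hibernation off deletes hiberfil.sys, and turning it back on recreates it at the current memory size):

```shell
powercfg /hibernate off
powercfg /hibernate on
```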
I was reminded last night that there are always going to be some constructs that your static analysis tools won’t save you from. [A point made by Microsoft’s Michael Howard, in his blog and in his new book on the Secure Development LifeStyle… er… LightCycle… er… LifeCycle]
For instance, here’s a piece of code:

#include <string.h>

int main(int argc, char **argv)
{
    char buf[10];
    strcpy(buf, argv[1]);
    return 0;
}

Yep, that’s a really short program that makes for a buffer overflow.
And yet, “cl /analyze” won’t complain, except to tell you that strcpy is deprecated.
So, what do you do?
The right first step, of course, is to replace strcpy – as a deprecated function, it’s kind of dangerous.
So, let’s say we replace strcpy with strcpy_s, and here’s the output I get from running “cl /analyze”:
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.42 for 80x86
Copyright (C) Microsoft Corporation. All rights reserved.
Microsoft (R) Incremental Linker Version 8.00.50727.42
Copyright (C) Microsoft Corporation. All rights reserved.
So, clearly, we’re still not detecting the overflow.
“Isn’t strcpy_s safe? Do we care that it’s not detecting the overflow?”
Sure, it’s safe – but it really depends on what you call ‘safe’. Using strcpy_s like that will simply kill your process right there, with an exception:
test.exe – Application Error
The exception unknown software exception (0xc000000d)
occurred in the application at location 0x0040108f.
Click on OK to terminate the program
Click on CANCEL to debug the program
[So, you did know you can press Ctrl-C in most dialogs to get the text of the message in the clipboard, right? Try it next time you need to report a dialog box error – much better than sending a graphic!]
But if you ship an application that does this, you’ll get a reputation for shipping crap.
What to do? I’ve known some authors who handle this by catching, and then ignoring, the exception, somewhere like their main loop.
Quite simply, that’s the wrong thing to do – it may even be the worst thing to do. Why? Because the exception you ignore may be the test packet that gets thrown at you prior to the successful exploit. You definitely want to fail on unexpected exceptions – and a buffer overflow, even when it’s detected, should not be expected.
But, you say, this exception is expected, and will be handled in some other way – say, by telling the user that he’s entered a bad value, and requesting his input again.
Well then, it’s not an exception, and you shouldn’t use a function that makes an exception out of a commonplace occurrence. Look instead to StringCchCopy, which will return an error result when you overflow. Handle the error result correctly, and you avoid the mess and overhead of exception handling. Or, use strncpy_s with the “_TRUNCATE” parameter for the count value, and get a similar kind of handling.
In Programmer Hubris Part 1, I described that frequently I'd come across applications that impinge on my consciousness far more than is justified by my infrequent use of them.
I expressed it rather simply as "I'm just not that into you". You, the developer, may believe that your app is the most important thing in the world – to me, it's something I use once every six months, and not because I want to. Don't give me even more reason to remove your application.
Maybe Microsoft gets this, because they've just released "Windows Principles: Twelve Tenets to Promote Competition".
Some of the highlights relevant to this topic:
1. Installation of any software
2. Easy access
4. Exclusive promotion of non-Microsoft programs
I also like:
9. No exclusivity
10. Communications protocols
Go and read the document – I hope to hold Microsoft to these principles in the years to come.
“Defence in depth” (or “defense in depth”, if you’re American) is a frequently misunderstood term in security.
It refers to designing your software with the assumption that layers above you that were supposed to protect you have failed to do so – in whatever manner is most inconvenient to your application.
As Steve Riley points out, it’s not the same as simply applying the same measure at a couple of different places – it’s about assuming that the measure above you failed.
An example is “my firewall restricts external traffic from reaching me” – that’s a first layer of defence. The second layer of defence might be “my application requires a user-name and password”. It’s defence in depth, because even if an attacker can fake traffic through your firewall, he’ll have to come up with a password that works.
I’m starting to think about laptop encryption as being “defence in death”.
It’s long been a statement in computer security that “if the attacker has physical access, it’s ‘game over’”.
That’s true – if you’re talking about a system that provides a service – as usual, you have to talk about what you are securing.
Your server rooms are generally susceptible to a guy with a chainsaw – physical access means loss of service; ergo, security problem. You fix this problem with strong physical security.
Your servers, if they can be stolen, are susceptible to being cracked open by hackers who want to pull the data from them; ergo, security problem. You fix this with strong physical security (plus an appropriate hardware retirement procedure that includes degaussing the disks, shredding them, and lightly sprinkling them with thermite).
Your laptops can be stolen even more easily, and can be similarly opened up to hackers who want to read their data. Again, this is a security problem.
You can’t solve it with physical security.
In fact, with security designs for laptops, you pretty much have to start with the assumption that physical security is impossible – and what can software security do for you, if the hacker can simply prevent your software from running?
This is where “defence in death” comes about – by making the system only usable while it is alive and running, by encrypting it with a key that is not stored locally, you make it functionally impossible to use or read the system until you have brought the system to life.
And while the system is alive, it can actively protect itself.
Encryption is a lovely thing. Be careful to understand how you use it.
Wow – yesterday, you could download “Microsoft Private Folders” (if you were attested as Genuine) from Microsoft’s downloads site.
Today, it’s gone.
There’s a brief synopsis of the story at the Seattle P-I’s site here – as usual, I’m patient enough to wait while you go and read it.
As a security engineer at a company that cares to manage its domain environment, I’m very comfortable with the argument that it’s not something our users should be installing – but it’s a service, and our users are not local admins, so they can’t install a new service.
What bothers me, though, is the argument that this is dangerous because “It also didn’t offer a way to retrieve a forgotten password, raising the possibility of effectively losing access to files if people forgot the phrase they chose.”
People, this is encryption.
That’s what it’s supposed to do.
You encrypt data that you would rather lose than leak.
You want to lose the data if it falls into the hands of people who don’t know the password, even if that means you.
If you can’t handle that, then encryption is not what you want – you want “protection”, or “concealment”, where there’s a back-door for people with powerful tools, a little training and some time.
I’m reminded again, this weekend, that many companies engage in security practices that are, at best, inconvenient to their customers, and at worst, a poor attempt at security.
As an example, consider my son’s use of his computer.
Every so often, he’ll damage or break a CD of one of his favourite games.
OK, for most people, this would simply be a learning experience.
But my son’s autistic. A broken favourite CD is cause for an absolute screaming, inconsolable meltdown that will last for hours, and will cause recurrences throughout the week.
So we adapted – we make copies of every CD-ROM, and we work from the backup.
When a disk gets damaged, it’s the copy, so we can make another copy from the original, and there’s no loss.
But there’s always that bugbear of “copy protection”. We first encountered it with a Thomas the Tank Engine game. Seriously, Thomas the Tank Engine needs copy protection?
Today, it was Frogger 2. This game is so old, it’s for Windows 95 and 98.
So, there’s no chance to replace it, and without the ability to copy it ahead of time, this was the original disk that was shattered.
Did I ever mention how much I hate copy protection, and how stupid I think it is?
Pirates in Singapore (or pick some other country – Italy, whatever) simply make a bit-for-bit copy, and the copy protection doesn’t even give them pause.
Real users at home, on the other hand, are unable to make backup copies for their own use.
Once again, I am reminded that I didn’t buy the game – I bought the plastic CD, and the game just happened to be on it. When I break the CD, I have no right to the game.
That just plain sucks.
I always like to ask questions that make everyone answer immediately with what they are sure is the right answer, and then tell them that they haven’t thought it through.
The title of this post is one such question. The answer is “yes”, right?
Sometimes, yes, but sometimes, no.
Let’s think about it a little.
The obvious vulnerability related to a denial-of-service is when you’re trying to provide a service to numerous users, and an outage will cost you (money, usually).
But what about a browser denial-of-service?
If I visit some hacker’s web site, and it closes my browser, what happens, really?
Unless you’re particularly hard of thinking, you simply don’t visit that web site again.
Yes, you have to dig further into that “it closes my browser” observation, because it might just be a null-pointer dereference, which simply stops the browser cold, or it might be an exploitable buffer overflow whose exploit only succeeds occasionally.
But if it’s really just a denial-of-service – and the only thing it does is to stop or close the browser – it’s not really a security issue. It’s a pain, and a reminder not to visit that site again, but it’s not a threat to your security, and you can wait to apply that patch.
Am I wrong?