The Atlantic today published a reminder that, as of today, the Associated Press style guide declares that the word “Internet” will be spelt with a lowercase ‘i’ rather than an uppercase ‘I’.
The article is titled “Elegy for the Capital-I Internet”, but it manages to be neither elegy nor eulogy, and misses the mark entirely: it focuses on the awe-inspiring size of the Internet as the reason the upper-case initial mattered, then argues that its sheer ubiquity should lead us to associate it with a lower-case i.
The "Internet", capital I, gives the information that this is the only one of its kind, anywhere, ever. There is only one Internet. A lower-case I would indicate that there are several "internets". And, sure enough, there are several lower-class networks-of-networks (which is the definition of “internet” as a lower-case noun).
I’d like to inform the people engaging in this navel-gazing debate over big-I or small-i that there is, functionally, exactly one Internet. When their cable company came to “install the Internet”, there was no question on the form asking “which internet do you want to connect to?” – and people would have been rightly upset if there had been.
So, from that perspective, very much capital-I is still the right term for the Internet. There’s only one. Those other smaller internets are not comparable to “the Internet”.
From a technical perspective, we’re actually at the time when it’s closest to being true that there are two internets. We’re in the midst of the long, long switch from IPv4 to IPv6. We’ve never done that before. And, while there are components of each that will talk to the other, it’s possible to describe the IPv6 and IPv4 collections of networks as two different “internets”. So, maybe small-i is appropriate, but for none of the reasons this article describes.
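As a rough illustration of that dual-stack situation – a hypothetical sketch using only Python’s standard socket module, not a claim about any particular network – you can ask the resolver for both address families and see the “two internets” side by side:

```python
import socket

def address_families(host, port=80):
    """Return the set of address families that `host` resolves to.

    A host published on both "internets" shows up with both
    AF_INET (IPv4) and AF_INET6 (IPv6) entries.
    """
    try:
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return set()
    # Each entry is (family, type, proto, canonname, sockaddr).
    return {family for family, *_ in infos}

# On a dual-stack machine, "localhost" typically resolves in both families:
families = address_families("localhost")
print(socket.AF_INET in families, socket.AF_INET6 in families)
```

A name that only appears in one family is, in this two-internets sense, only on one of them.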
Having said that, IPv6 engineers work really, really hard to make sure that users just plain don’t notice that there’s a second internet while they’re building it, so that it feels exactly as it would if there were still only one Internet.
Again, you come back to “there is only one Internet”: you don’t get to check a box that selects which of several internets you are going to connect to. It’s not like “the cloud”, where there are multiple options. You are either connected to the one Internet, or you’re not connected to any internet at all.
Capital I, and bollocks to the argument from the associated press – lower-cased, because it’s not really that big or important, and neither is the atlantic. So, with their own arguments (which I believe are fallacious anyway), I don’t see why they deserve an upper-case initial.
The Atlantic, on the other hand – that’s huge and I wouldn’t want to cross it under my own steam.
And the Internet, unlike the many other internets, deserves its capital I as a designation of its singular nature. Because it’s a proper noun.
Depending on the kind of company you work at, it’s either:
If you work at the first place, reach out to me on LinkedIn – I know some people who might want to work with you.
If you’re at the third place, you should probably get out now. Whatever they’re paying you, or however much the stock might be worth come the IPO, it’s not worth the pain and suffering.
If you’re at the second place, congratulations – you’re at a regular, ordinary workplace that could do with a little better management.
A surprisingly great deal.
Whenever there’s a security incident, there should be an investigation as to its cause.
Clearly the cause is always human error. Machines don’t make mistakes; they act in predictable ways – even when they act randomly, they can be modeled stochastically, and their errors taken into consideration. Your computer behaves like a predictable machine, but at various levels it actually routinely behaves as if it’s rolling dice, and there are mechanisms in place to bias those random results towards the predictable answers you expect from it.
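To make that concrete, here’s a toy sketch in Python – entirely hypothetical, standing in for real mechanisms like ECC memory or TCP retransmission – of how a system can bias a dice-rolling component towards a predictable answer:

```python
import random
from collections import Counter

def flaky_read(true_value, error_rate=0.1, rng=random):
    """Simulate a read that occasionally returns a corrupted value."""
    if rng.random() < error_rate:
        return true_value ^ 0xFF  # a bit-flipped, wrong answer
    return true_value

def reliable_read(true_value, votes=5):
    """Bias the randomness towards the predictable answer by
    reading several times and taking the majority result."""
    results = Counter(flaky_read(true_value) for _ in range(votes))
    return results.most_common(1)[0][0]

print(reliable_read(0x2A))
```

With a 10% per-read error rate, a 5-way vote returns the wrong answer well under 1% of the time – the randomness is still there underneath, but the caller almost never sees it.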
Humans, not so much.
Humans make all the mistakes. They choose to continue using parts that are likely to break, because they are past their supported lifecycle; they choose to implement only part of a security mechanism; they forget to finish implementing functionality; they fail to understand the problem at hand; etc, etc.
It always comes back to human error.
Occasionally I will experience these great flashes of inspiration from observing behaviour, and these flashes dramatically affect my way of doing things.
One such was when I attended the weekly incident review board meetings at my employer of the time – a health insurance company.
Once each incident had been resolved and addressed, they were submitted to the incident review board for discussion, so that the company could learn from the cause of the problem, and make sure similar problems were forestalled in future.
These weren’t just security incidents, they could be system outages, problems with power supplies, really anything that wasn’t quickly fixed as part of normal process.
But the principles I learned there apply just as well to security incidents.
The biggest principle I learned was “root cause analysis” – that you look beyond the immediate cause of a problem to find what actually caused it in the long view.
At other companies, which can’t bear to think that they didn’t invent absolutely everything, this goes by different names – for instance, “the five whys” (suggesting that if you ask “why did that happen?” five times, you’ll get to the root cause). Other names are possible, but the majority of the English-speaking world knows it as “root cause analysis”.
This is where I learned that if you believe the answer is that a single human’s error caused the problem, you don’t have the root cause.
Whenever I discuss this with friends, they always say “But! What about this example, or that?”
You should always ask those questions.
Here are some possible individual causes, and some of their associated actual causes:
| Individual cause | Questions toward the actual cause |
|---|---|
| Bob pulled the wrong lever | Who trained Bob about the levers to pull? Was there documentation? Were the levers labeled? Did anyone assess Bob’s ability to identify the right lever to pull by testing him with scenarios? |
| Kate was evil and did a bad thing | Why was Kate allowed to have unsupervised access? Where was the monitoring? Did we hire Kate? Why didn’t the background check identify the evil? |
| Jeremy told everyone the wrong information | Was Jeremy given the right information? How did Jeremy come to turn right information into wrong? Should this information have been communicated automatically, without going through a Jeremy? Was Jeremy trained in how to transmit information? Why did nobody receiving the information verify it? |
| Grace left her laptop in a taxi | Why does Grace have data that we care about losing on her laptop? Can we disable the laptop remotely? Why does she even have a laptop? What is our general solution for people, who will be people, leaving laptops in taxis? |
| Jane wrote the algorithm with a bug in it | Who reviews Jane’s code? Who tests the code? Is the testing automated? Was Jane given adequate training and resources to write the algorithm in the first place? Is this her first time writing an algorithm – did she need help? Who hired Jane for that position – what process did they follow? |
I could go on and on, and I usually do, but if you ever find yourself blaming an individual and saying “human error caused this fault”, remember that humans, just like machines, are random and only stochastically predictable. If you want predictable results, you have to have a framework that brings that randomness and unpredictability into some form of reliable operation.
Many of the questions I asked above will also end up with the blame apparently being assigned to an individual – that’s just a sign that the analysis needs to keep going until you find an organisational fix. Because if all you do is fix individuals, then as you hire new individuals and lose old individuals, your organisation itself will never improve.
[Yes, for the pedants, your organisation is made up of individuals, and any organisational fix is embodied in those individuals – so blog about how the organisation can train individuals to make sure that organisational learning is passed on.]
Finally, if you’d like to not use Ubuntu as my “circle of blame” logo, there’s plenty of others out there – for instance, Microsoft Alumni:
Tomorrow is April 1, also known as April Fools’ Day.
As a result, you should expect that anything I say on this blog is fabrication, fantasy, foolery and snark.
Apparently, this hasn’t previously been completely stupidly blindly obvious.
Here are some of the things I expect to happen this year as a result of the leap year:
And then there are the ordinary issues with dates that programmers can’t seem to understand – like the fact that there are more than 52 weeks in a year. “ASSERT(weeknum>0 && weeknum<53);”, anyone? 52 weeks is only 364 days, and every year has more days than that. [Pedantic mathematical note – maybe this somewhat offsets the “employer’s extra day” item above]
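A quick check with Python’s standard datetime module shows why that assertion is broken – some years genuinely contain an ISO week 53:

```python
from datetime import date

# ISO 8601 week numbers run from 1 to 52 *or 53*, depending on the year.
# 2015 is one of the years with a week 53:
iso_year, iso_week, iso_weekday = date(2015, 12, 28).isocalendar()
print(iso_year, iso_week)  # 2015 53

# So ASSERT(weeknum > 0 && weeknum < 53) rejects perfectly valid dates.
```

Note also that the first few days of January can belong to week 52 or 53 of the *previous* ISO year – another edge case that asserts like the one above quietly ignore.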
Happy Leap Day – and always remember to test your code in your head as well as in real life, to find its extreme input cases and associated behaviours. They’ll get tested anyway, but you don’t want it to be your users who find the bugs.
Version control is one of those vital tools for developers that everyone has to use but very few people actually enjoy or understand.
So, it was with no surprise that I noted a few months ago that the version control software on which I’ve relied for several years for my personal projects, Component Software’s CS-RCS, has seen no development in years, and can no longer be downloaded from its source site. [Hence no link from this blog]
I’ve used git before a few times in professional projects while I was working at Amazon, but relatively reluctantly – it has incredibly baroque and meaningless command-line options, and gives the impression that it was written by people who expected their users to be just as proficient with the ins and outs of version control as they are.
While I think it’s a great idea for developers to build software they would use themselves, it’s important to make sure that the software you build is also accessible to people who aren’t at the same level of expertise as yourself. After all, if your users were as capable as the developer, they would already have built the solution themselves, so your greater user-base comes from accommodating everyone from novice to expert, with simple points of entry and paths to greater mastery.
git, along with many other open source, community-supported tools, doesn’t really accommodate the novice.
As a result, most people who use it rely on “cookbooks” of instructions – “if you want to do X, type commands Y and Z” – without any emphasis on understanding why you’re doing this.
This leads inexorably to a feeling that you’re setting yourself up for a later fall, when you decide you want to do an advanced task, but discover that a decision you’ve made early on has prevented you from doing the advanced task in the way you want.
That’s why I’ve been reluctant to switch to git.
But it’s clear that git is the way forward in the tools I’m most familiar with – Visual Studio and its surrounding set of developer applications.
It’s one of those decisions I’ve made some time ago, but not enacted until now, because I had no idea how to start – properly. Every git repository I’ve worked with so far has either been set up by someone else, or set up by me, based on a cookbook, for a new project, and in a git environment that’s managed by someone else. I don’t even know if those terms, repository and environment, are the right terms for the things I mean.
There are a number of advanced things I want to do from the very first – particularly, I want to bring my code from the old version control system, along with its history where possible, into the new system.
And I have a feeling that this requires I understand the decisions I make when setting this up.
So, it was with much excitement that I saw a link to this arrive in my email:
Next thing is I’m going to watch this, and see how I’m supposed to work with git. I’ll let you know how it goes.
I happened upon a blog post by the Office team yesterday which surprised me, because it talked about a feature in PowerPoint that I’ve wanted ever since I first got my Surface 2.
Here’s a link to documentation on how to use this feature in PowerPoint.
It seems like the obvious feature a tablet should have.
Here’s a video of me using it to draw a few random shapes:
But not just in PowerPoint – this should be in Word, in OneNote, in Paint, and pretty much any app that accepts ink.
So here’s the blog post from Office noting that this feature will finally be available for OneNote in November.
On iPad, iPhone and Windows 10. Which I presume means it’ll only be on the Windows Store / Metro / Modern / Immersive version of OneNote.
That’s disappointing, because it should really be in every Office app. Hell, I’d update from Office 2013 tomorrow if this was a feature in Office 2016!
Please, Microsoft, don’t stop at the Windows Store version of OneNote.
Shape recognition, along with handwriting recognition (which is apparently also hard), should be a natural part of my use of the Surface Pen. It should work the same across multiple apps.
That’s only going to happen if it’s present in multiple apps, and is a documented API which developers – of desktop apps as well as Store apps – can call into.
Well, desktop apps can definitely get that.
I’ll admit that I haven’t had the time yet to build my own sample, but I’m hoping that this still works – there’s an API called “Ink Analysis”, which is exactly how you would achieve this in your app:
It allows you to analyse ink you’ve captured, and decide if it’s text or a drawing, and if it’s a drawing, what kind of drawing it might be.
[I’ve marked this with the tag “Alun’s Code” because I want to write a sample eventually that demonstrates this function.]
TL;DR – hardware problems, resuming NCSAM posts when / if I can get time.
Well, that went about as well as can be expected.
I start a month of daily posts, and the first one is all that I achieved.
Perhaps I’ve run out of readers, because nobody asked if I was unwell or had died.
No, I haven’t died, the simple truth is that a combination of hardware failures and beta testing got the better of me.
I’d signed up to the Fast Ring of Windows Insider testing, and had found that Edge and Internet Explorer both seemed to get tired of running Twitter and Facebook, and repeatedly got slower and slower to refresh, until eventually I had to quit and restart them.
Also the SP3 refused to recognise my Microsoft Band as plugged in [actually a hardware failure on the Band, but I’ll come to that another day].
Naturally, I assumed this was all because of the beta build I was using.
So, I did what any good beta tester would do. I filed feedback, and pressed the “Roll back” button.
It didn’t seem to take as long as I expected.
That’s your first sign that something is seriously wrong, and you should take a backup of whatever data is left.
So I did, which is nice, because the next thing that happened is that I tried to open a Windows Store app.
It opened a window and closed immediately.
So did every other Windows Store / Metro / Modern / Immersive app I tried.
Including Windows Store itself.
After a couple of days of dinking around with various ‘solutions’, I decided I’d reached beta death stage, and should FFR (FDISK, FORMAT and Reinstall).
First, make another backup, just because you can’t have too many copies of the data you rely on.
That should have been close to the end of the story, with me simply reinstalling all my apps and moving along.
In fact, I started that.
Then my keyboard stopped working. It didn’t even light up.
Plugging the keyboard (it’s a Surface Pro Type Cover) into another Surface (the Surface Pro Type Covers work on, but don’t properly cover, a Surface 2, which we have in my house) demonstrated that the keyboard was just fine on a different system, just not on my main system.
So, I kept a few things running by using my Bluetooth keyboard and mouse, and once I convinced myself it was worth the trip, I took my Surface Pro 3 out to the Microsoft Store in Bellevue for an appointment.
Jeannie was the tech assigned to help me with my keyboard issue. Helpful and friendly, she didn’t waste time with unnecessary questions or dinking around with stuff that could already be ruled out.
She unplugged the keyboard and tried it on another system. It worked. No need to replace the keyboard.
Can she do a factory reset?
Be my guest – I made another backup before I came out to the store.
So, another quick round of FFR, and the Surface still doesn’t recognise the keyboard.
Definitely a hardware problem, and that’s the advantage of going to the Microsoft Store.
“Let me get you a replacement SP3,” says Jeannie, and heads out back to the stock room.
“Bad news,” she says on coming back. “We don’t have the exact model you have” (an i7 Surface Pro 3 with 256 GB of storage).
“Is it OK if we get you the next model up, with twice the storage?”
“Only if you can’t find any way to upgrade me for free to the shiny Surface Book you have on display up front.”
Many thanks to Jeannie for negotiating that upgrade!
But now I have to reinstall all my apps, restore all my data, and get back to functioning before I can engage in fun things like blogging.
I’ll get back into the NCSAM posts, but they’ll be more overview than I was originally planning.
I’ve updated from Windows 8.1 to Windows 10 Enterprise Insider Preview over this weekend, on my Surface Pro 3 and a Lenovo tablet. Both machines are used for software development as well as playing games, so seemed the ideal place to practice.
So here’s some initial impressions:
I’ve mentioned before (ranted, perhaps) about how the VPN support in Windows 8.1 is great for desktop apps, but broken for Metro / Modern / Immersive / Windows Store apps.
Still, maybe now I’m able to provide feedback, and Windows is in a beta test phase, perhaps they’ll pay attention and fix the bugs.
It’s a beta, but just in case you were persuaded to install this on a production system, it’s still not release quality.
Every so often, the Edge browser (currently calling itself “Project Spartan”) will just die on you.
I’ve managed to get the “People Hub” to start exactly twice without crashing immediately.
Download the most recent version from the Insider’s page, and you still have to apply an update to the entire system before you’re actually up to date. The update takes essentially as long as the initial install.
Hey, it’s a beta – what did you expect?
Things will break, you’ll find yourself missing functionality, so you may need to restore to your original state. Update before you install, and fewer things will be as likely to go wrong in the upgrade.
They won’t fix things you don’t provide feedback about.
OK, so maybe they also won’t fix things that you DO provide feedback on, but that’s how life works. Not everything gets fixed. Ever.
But if you don’t report issues, you won’t ever see them fixed.
The People “Hub” in Windows 10, from the couple of times I’ve managed to execute it, basically has my contacts, and can display what’s new from them in Outlook Mail.
I rather enjoy the Windows 8.1 People Hub, where you can see in one place the most recent interactions in Twitter, Facebook, LinkedIn and Skype. Or at least, that’s what it promises, even if it only actually delivers Facebook and Twitter.
It’s always possible to delete a video file, of course, but in Windows 8.1, after you finished watching a video in the Videos app, you had to go find some other tool in which to do so – and hope that you deleted the right one.
In Windows 10 you can use the context menu (right click, or tap and hold) on a video to delete it from your store.
Still needs some more work – it doesn’t display subtitles / closed-captioning, it only orders alphabetically, and there’s no jumping to the letter “Q” by pressing the “Q” key, but this app is already looking very functional even for those of us who collect MP4 files to watch.
I really, really liked the Media Center. More than TiVo. We have several Media Center PCs in our house, and now we have to figure out what we’re going to do. I’m not going back to having a made-for-purpose device that can’t do computing, I want my Media Center. I’ll try some of its competitors, but it’d be really nice if Microsoft relents and puts support back for Media Center.
Excellent HTML5 compatibility, reduced chance of being hit by third party vulnerabilities, F12 Developer Tools, and still allows me to test for XSS vulnerabilities if I choose to do so.
Pretty much what I want in a browser, although from a security standpoint, the choice to allow two vulnerability-prone third-party add-ins, Flash and Reader, into the browser seems to be begging for future trouble.
Having said that, you can disable Adobe Flash in the Advanced Settings of your Spartan browser. I’m going to recommend that you do that on all your non-gaming machines. Then find out which of your web sites need it, and either fix them, or decide whether you can balance the threat of Flash with the utility of that service.
The F12 Developer Tools continue to be a very useful set of web site debugging tools, and assist me greatly in discovering and expanding on web site vulnerabilities. I personally find them easier than debugging tools in other browsers, and they have the benefit of being always installed in recent Microsoft browsers.
The “Reader” view is a nice feature, although it was present in Windows 8.1, and should be used any time you want to actually read the contents of a story, rather than wade through adverts and constant resizing of other content around the text you’re actually interested in.
Because, you know, I’m all about the XSS.
Internet Explorer has a pretty assertive XSS filter built in, and even when you turn it off in your settings, it still comes back to prevent you. I find this to be tricky, because I sometimes need to convince developers of the vulnerabilities in their apps. Firefox is often helpful here, because it has NO filters, but sometimes the behaviour I’m trying to show is specific to Internet Explorer.
Particularly, if I type a quote character into the URL in Internet Explorer, it sends a quote character. Firefox will send a %22 or %27 (double or single quotes). So, sometimes IE will trigger behaviour that Firefox doesn’t.
Sadly, although Spartan does seem to still be useful for XSS testing, the XSS filter can’t be specifically turned off in settings. I’d love to see if I can find a secret setting for this.
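The encoding difference described above is easy to demonstrate with Python’s standard urllib – this is just an illustration of the Firefox-style percent-encoding, not either browser’s actual code:

```python
from urllib.parse import quote, unquote

# Firefox-style behaviour: quote characters are percent-encoded
# before being sent in the URL.
assert quote('"') == "%22"   # double quote
assert quote("'") == "%27"   # single quote

# IE-style behaviour sends the raw character, so a reflected payload
# like this reaches the server un-encoded there:
payload = '" onmouseover="alert(1)'
encoded = quote(payload)
print(encoded)
assert unquote(encoded) == payload
```

That round trip is why some reflected-XSS demonstrations only fire in the browser that leaves the quotes alone.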
Windows has needed a PDF printer since, oh, Windows 3.1. A print driver that prompts you for a file name, and saves whatever you’re printing as a PDF file.
With Office, this kind of existed with Save as PDF. With OneNote, you could Print to OneNote, open the View ribbon, and hide the header, before exporting as a PDF. But that’s the long way around.
With Windows 10, Microsoft installed a new printer driver, “Microsoft Print to PDF”. It does what it says on the tin, allowing you to generate PDFs from anywhere that can print.
I use a Surface Pro 3 as my main system, and I have to say that the reversion to a mainly desktop model of operations is nice to my eyes, but a little confusing to the hands – I don’t quite know how to manage things any more.
Sometimes I like to work without the keyboard, because the tablet works well that way. But now I can’t close apps by sliding from top to bottom, even when I’ve expanded them to full screen. Not sure how I’m supposed to do this.
Yesterday’s unexpected notice from Micro$oft that I am not being awarded MVP status this year has caused me to take stock of my situation.
Now that I’m no longer a paid shill of the Evil Empire, and they’ve taken away my free Compuserve account, I feel I can no longer use their products – mainly because I can no longer afford them if I can’t download them for free from MSDN and TechNet.
Microsoft has been widely derided in the security community for many years, and despite having invented, expanded and documented several secure development processes, practices and tools, it seems they still can’t ship a copy of Flash with Internet Explorer that doesn’t contain rolling instances of buffer overflows.
Microsoft make a great deal out of their SDL tools – documentation and threat modeling guides – and yet they still haven’t produced a version that runs on Mac or Linux systems, unlike Mozilla, which has been able to create a multi-platform threat modeling tool called Seasponge. Granted, it only lets you draw rudimentary data-flow diagrams, and provides no assistance or analysis of its own, requiring you to think of and write up your own threats – but it’s better than nothing! Not better than a whiteboard, granted, but vastly better than nothing.
Active Directory is touted for its ability to provide central management through Group Policy Objects, but it simply isn’t able to scale nearly as well as its Open Source competition, Linux, which allows each desktop owner to manage their own security to a degree of granularity that allows for some fantastic incoherence (ahem, “innovation”) between neighbouring cubicles. This is, after all, the Year of Linux on the Desktop.
Unlike Windows, with its one standard for disk encryption, and its one standard for file encryption, Linux has any number to choose from, each with some great differences from all the others, and with the support of a thriving community to tell you their standard is the de-facto one, and why the others suck. You can spend almost as much bandwidth discussing which framework to use as you would save by not bothering to encrypt anything in the first place – which is, of course, what happens while you’re debating.
Something something OpenSSL.
IPv6 has been a part of Windows since Windows XP, and has been enabled by default for considerably longer. And yet so very few of Microsoft’s web properties are available with an IPv6 address, something I’ve bugged them about for the last several years. Okay, so www.microsoft.com, www.bing.com and ftp.microsoft.com all have recently-minted IPv6 addresses, but what about www.so.cl? Oh, OK.
Then there’s the Windows TCP SYN behaviour, where a SYN arriving at a busy socket was responded to by a RST, rather than the silence echoed by every other TCP stack, and which was covered up by Windows re-sending a SYN in response to a RST, where every other TCP stack reports a RST as a quick failure. I can’t tell you how many years I’ve begged Microsoft to change this behaviour. OK, so the last time I spoke to them on this issue, my son was eight, and now he’s driving, so perhaps they’ve worked some more on that since then. It is, after all, a vital issue to support correct connectivity.
Finally, of course, the declining MVP swag quality has hit me hard, as I now have to buy my own laptop bag to replace the MVP ones that wore out and were never replaced, a result of Microsoft’s pandering to environmental interests by shipping a chunk of glass instead of a cool toy or bag each year.
My MVP toys were fun – a logo-stamped 1GB USB drive, a laser-pointer-pen-and-stylus which doesn’t work on capacitive touch screens, a digital photo frame – but never as much fun as those given to the MVPs in other Product Groups. The rumoured MVP compound in Florida available for weekend getaways always seemed to be booked.
So, how do I get MacOS installed on this Surface Pro 3?
Now that I have a Surface 2, I’m going to leave my laptop at home when I travel.
This leaves me with a concern – obviously, I’m going to play with some of my hobby software development while I have “down time”, but the devices for which I’m building are traveling with me, while the dev machine stays at home.
That’s OK where I’m building for the laptop, because it’s available by Remote Desktop through a Remote Desktop Gateway.
Deploying to my other devices – the Windows Phone and the Surface 2 running Windows RT – is something that I typically do by direct connection, or on the local network.
For the Windows Phone, there’s a Store called “Beta” as opposed to “Public”, into which you can deploy your app, make it available to specific listed users, and this will allow you to quickly distribute an app remotely to your device.
Details on how to do this are here.
The story on Windows Store apps appears, at first blush, to be far more dismal, with numerous questions online asking “is there a beta store for Windows like there is for the phone?”
The answer comes back “no, but that’s a great idea for future development”.
But it is completely possible to distribute app packages to your Windows RT and other Windows 8.1 devices, using PowerShell.
The instructions at MSDN, here, will tell you quite clearly how you can do this.