Yeah, so, I was apparently deluded; the problem is still here. It appears to be a bona fide bug in Windows 8, with a hotfix at http://support.microsoft.com/kb/2797356 – but that’s only for x86 versions of Windows, and not for the Surface 2.
Since I wrote this article, another issue caused me to reset my WMI database, by deleting everything under C:\Windows\System32\wbem\Repository and rebooting. After that, the VPN issues documented in this article have gone away.
I have a home VPN – everyone should, because it gives you secure access to your home systems when you’re out and about, whether at the Starbucks down the street or halfway across the world, as I was on my trip to China last week.
Useful as my home VPN is, and hard as it is to get working (see my last post on Windows 8 VPN problems), it’s only useful if I can get my entire computer to talk through the VPN.
Sidebar – VPN split tunneling
Note that I am not disputing the value of split tunneling in a VPN, which is where you might set up your client to use the VPN only for a range of addresses, so that (for example) a computer might connect to the VPN for connections to a work intranet, but use the regular connectivity for the major part of the public web. For this article, assume I want everything but my link-local traffic to be forwarded to my VPN.
So, in my last VPN post, we talked about setting up the client end of a VPN, and now I want to use it.
Connecting is the easy part, and once connected, most of my apps on the Surface 2 work quite happily, connecting to the Internet through my VPN.
All of the Desktop apps seem to work without restriction, but there are some odd gaps when it comes to using “Windows Store” apps, also known as “Metro” or “Modern UI” apps. Microsoft can’t call this “Metro” any more, even though that’s the most commonly used term for it, so I’ll follow their lead and call this the “Modern UI” [where UI stands for User Interface].
Most glaring of all is the Modern UI Internet Explorer, which doesn’t seem to allow any connections at all, simply displaying “This page can’t be displayed”. The exception to this is if I connect to a web server that is link-local to the VPN server.
I’d think this was a problem with the way I had set up my VPN server, or my client connection, were it not for the fact that my Windows 8.1 laptop connects to this same VPN with no issues in either the Modern or Desktop versions of Internet Explorer – and, of course, that Internet Explorer for the Desktop on my Surface 2 also works correctly.
I’d like to troubleshoot and debug this issue, but of course, the only troubleshooting tools for networking in the Surface 2 run on the Desktop, and therefore work quite happily, as if nothing is wrong with the network. And from their perspective, this is true.
Of course, Internet Explorer has always been claimed by Microsoft to be a “part of the operating system”, and in Windows 8.1 RT, there is no difference in this respect.
Every Modern UI application which includes a web control, web view, or in some way asks the operating system or development framework to host a web page, also fails to reach its intended target through the VPN.
Technical support had me try a number of things, including resetting the system, but none of their suggestions had any effect. Eventually I found a tech support rep who told me this is a bug, not that that is really what you’d call a resolution of my problem. These are the sort of things that make it clear that the Surface is still in its early days, and while impressive, has a number of niggling issues that need “fit and finish” work before significant other features get added.
It should be easy enough to set up a VPN in Windows, and everything should work well, because Microsoft has been doing these sorts of things for some years.
Sure enough, if you open up the Charms bar, choose Settings, Change PC Settings, and finally Network, you’re brought to this screen, with a nice big friendly button to add a VPN connection. Tapping on it leads me to the following screen:
No problems, I’ve already got these settings ready to go.
Probably not the best to name my VPN settings “New VPN”, but then I’m not telling you my VPN endpoint. So, let’s connect to this new connection.
So far, so good. Now it’s verifying my credentials…
And then we should see a successful connection message.
Not quite. For the search engines, here’s the text:
Error 860: The remote access connection completed, but authentication failed because of an error in the certificate that the client uses to authenticate the server.
This is upsetting, because of course I’ve spent some time setting the certificate correctly (more on that in a later post), and I know other machines are connecting just fine.
I’m sure that, at this point, many of you are calling your IT support team, and they’re reminding you that they don’t support Windows 8 yet, because of some lame excuse about it being “not yet stable, official, standard, or Linux”.
Don’t take any of that. Simply open the Desktop.
What? Yes, Windows 8 has a Desktop. And a Command Prompt, and PowerShell. Even in the RT version.
Oh, uh, yeah, back to the instructions.
Forget navigating the desktop – just press Windows+X, and then W, to open the Network Connections group, like this:
Select the VPN network you’ve created, and select the option to “Change settings of this connection”:
In the Properties window that pops up, you need to select the Security tab:
OK, so that’s weird. The Authentication group box has two radio buttons – but neither one is selected. My Grandma had a radio like that: you couldn’t tell what station you were going to get when you turned it on – and the same is generally true for software. So, we should choose one:
It probably matters which one you choose, so check with your IT team (tell them you’re connecting from Windows 7, if you have to).
Then we can connect again:
And… we’re connected.
Now for another surprise, when you find that the Desktop Internet Explorer works just fine, but the “Modern UI” (formerly known as “Metro”) version of IE decides it will only talk to sites inside your LAN, and won’t talk to external sites. Oh, and that behavior is extended to any Metro app that embeds web content.
I’m still working on that one. News as I have it!
I’m putting this post in the “Programmer Hubris” section, but it’s really not the programmers this time, it’s the managers. And the lawyers, apparently.
Well, yeah, it always does, and this time what set me off is an NPR article by Tom Gjelten in a series they’re currently doing on “cybersecurity”.
This article probably had a bunch of men talking to NPR with expressions such as “hell, yeah!” and “it’s about time!”, or even the more balanced “well, the best defence is a good offence”.
Absolute rubbish. Pure codswallop.
Kind of, and no.
We’re certainly not being “attacked” in the manner described by analogy in the article.
"If you’re just standing up taking blows, the adversary will ultimately hit you hard enough that you fall to the ground and lose the match. You need to hit back." [says Dmitri Alperovitch, CrowdStrike’s co-founder.]
Yeah, except we’re not taking blows, and this isn’t boxing, and they’re not hitting us hard.
"What we need to do is get rid of the attackers and take away their tools and learn where their hideouts are and flush them out," [says Greg Hoglund, co-founder of HBGary, another firm known for being hacked by a bunch of anonymous nerds that he bragged about being all over]
That’s far closer to reality, but the people whose job it is to do that are the duly appointed law enforcement operatives who are able to enforce the law.
"It’s [like] the government sees a missile heading for your company’s headquarters, and the government just yells, ‘Incoming!’ " Alperovitch says. "It’s doing nothing to prevent it, nothing to stop it [and] nothing to retaliate against the adversary." [says Alperovitch again]
No, it’s not really like that at all.
There is no missile. There is no boxer. There’s a guy sending you postcards.
Yep, pretty much exactly that.
Every packet that comes at you from the Internet is much like a postcard. It’s got a from address (of sorts) and a to address, and all the information inside the packet is readable. [That’s why encryption is applied to all your important transactions]
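To make the analogy concrete, here’s a tiny Python sketch – loopback only, with an illustrative message – showing that a UDP datagram arrives with a readable source address and readable contents, just like a postcard:

```python
# The "postcard" analogy in code: a UDP datagram carries its source
# address in the clear, and (absent encryption) its payload is readable
# by anyone along the path. Loopback only, so it runs anywhere.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
receiver.settimeout(5)
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"Wish you were here!", ("127.0.0.1", port))

payload, (src_addr, src_port) = receiver.recvfrom(1024)
# The "postcard" arrives with a readable from-address and readable contents.
print(src_addr, src_port, payload.decode())

sender.close()
receiver.close()
```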
There are a number of ways. You might be receiving far more postcards than you can legitimately handle, making it really difficult to assess which are the good postcards and which are the bad ones. So, you contact the postman and let him know this, and he tracks down (with the aid of the postal inspectors) who’s sending them, and stops carrying those postcards to you. In the meantime, you learn how to spot the obvious crappy postcards and throw them away – and when you use a machine to do this, it’s a lot less of a problem. That’s a denial of service attack.
Then there’s an attack against your web site. Pretty much, that equates to the postcard sender learning that there’s someone reading the postcards, whose job it is to do pretty much what the postcards tell them to do. So he sends postcards that say “punch the nearest person to you really hard in the face”. Obviously a few successes of this sort lead you to firing the idiot who’s punching his co-workers, and instead training the next guy as to what jobs he’s supposed to do on behalf of the postcard senders.
I’m sure that my smart readers can think up their own postcard-based analogies of other attacks that go on, now that you’ve seen these two examples.
Sure, send postcards, but unless you want the postman to be discarding all your outgoing mail, or the law enforcement types to turn up at your doorstep, those postcards had better not be harassing or inappropriate.
Even if you think you’re limiting your behaviour to that which the postman won’t notice as abusive, there’s the other issue with postcards. There’s no guarantee that they were sent from the address stated, and even if they were sent from there, there is no reason to believe that they were official communications.
All it takes is for some hacker to launch an attack from a hospital’s network space, and you’re now responsible for attacking an innocent target where lives could actually be at risk. [Sure, if that were the case, the hospital has shocking security issues of its own, but can you live with that rationalisation if your response to someone attacking your site winds up killing someone?]
I don’t think that counterattack on the Internet is ethical or appropriate.
Saw this update in my Windows Update list recently:
As it stands right now, this is what it says (in part):
OK, so I started off feeling good about this – what’s not to like about the idea that DTLS, a security layer for UDP that works roughly akin to TLS / SSL for TCP, now can be made a part of Windows?
Sure, you could say “what about downstream versions”, but then again, there’s a point where a developer should say “upgrading has its privileges”. I don’t support Windows 3.1 any more, and I don’t feel bad about that.
No, the part I dislike is this one:
Note DTLS provides TLS functionalities that are based on the User Datagram Protocol (UDP) protocol. Because TLS is based on the Transmission Control Protocol (TCP) protocol, DTLS performs better than TLS.
That’s just plain wrong. Actually, I’m not even sure it qualifies as wrong, and it’s quite frankly the sort of mis-statement and outright guff that made me start responding to networking posts in the first place, and which propelled me in the direction of eventually becoming an MVP.
Yes, I was the nerdy guy complaining that there were already too many awful networking applications, and that promulgating stupid myths like “UDP performs better than TCP” or “the Nagle algorithm is slowing your app down, just disable it” causes there to be more of the same.
But I think that’s really the point – you actually do want nerds of that calibre writing your network applications, because network programming is not easy – it’s actually hard. As I have put it on a number of occasions, when you’re writing a program that works over a network, you’re only writing one half of the application (if that). The other half is written by someone else – and that person may have read a different RFC (or a different version of the protocol design), may have had a different interpretation of ambiguous (or even completely clear) sections, or could even be out to destroy your program, your data, your company, and anyone who ever trusted your application.
Surviving in those circumstances requires an understanding of the purity of good network code.
Bicycle messengers are faster than the postal service, too. Fast isn’t always what you’re looking for. In the case comparing UDP and TCP, if it was just a matter of “UDP is faster than TCP”, all the world’s web sites would be running on some protocol other than HTTP, because HTTP is rooted in TCP. Why don’t they?
Because UDP repeats packets, loses packets, repeats packets, and first of all, re-orders packets. And when your web delivery over UDP protocol retransmits those lost packets, correctly orders packets, drops repeated packets, and thereby gives you the full web experience without glitches, it’s re-written large chunks of the TCP stack over UDP – and done so with worse performance.
Don’t get me wrong – UDP is useful in and of itself, just not for the same tasks TCP is useful for. UDP is great for streaming audio and video, because you’d rather drop frames or snippets of sound than wait for them to arrive later (as they would do with TCP requesting retransmission, say). If you can afford to lose a few packets here and there in the interest of timely delivery of those packets that do get through, your application protocol is ideally suited to UDP. If it’s more important to occasionally wait a little in order to get the whole stream, TCP will outperform UDP every time.
Never choose UDP over TCP because you heard it goes faster.
Choose UDP over TCP because you’d rather have packets dropped at random by the network layer than have them arrive any later than the absolute fastest they can get there.
Choose TCP over UDP because you’d rather have all the packets that were sent, in the order that they were sent, than get most / many / some of them earlier.
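If you’re unconvinced, here’s a little Python sketch of the bookkeeping you end up re-implementing when you need in-order, duplicate-free delivery over UDP. Retransmission timers and acknowledgements are omitted for brevity, and the function name is mine, not any real API:

```python
# A sketch of what TCP does for free: sequence numbers, duplicate
# suppression, and reordering. Build "reliable UDP" and you write this
# yourself (plus retransmission, which is omitted here).
def reassemble(datagrams):
    """datagrams: iterable of (sequence_number, payload) as received off
    the wire -- possibly duplicated and out of order."""
    seen = {}
    for seq, payload in datagrams:
        seen.setdefault(seq, payload)   # drop duplicated datagrams
    # deliver in sequence order, like TCP's receive buffer does
    return b"".join(seen[seq] for seq in sorted(seen))

# Out-of-order arrival with a duplicate, as UDP happily permits:
wire = [(2, b"lo, "), (1, b"Hel"), (3, b"world"), (2, b"lo, ")]
print(reassemble(wire))   # b'Hello, world'
```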
And whether you use TCP or UDP, you can now add TLS-style security protection.
I await the arrival of encrypted UDP traffic with some interest.
This year is a special one for anniversaries – my 45th birthday, 20 years since I arrived in the USA, 10 years since beating cancer – seems like the perfect time for ISOC to honour me by switching everyone to IPv6.
IANA just held a ceremony (streamed live, and with a press conference following at 10amEST) to hand out the last of the IPv4 /8 blocks to Regional Internet Registries – RIRs.
It’s a quiet, but historic moment, as it truly marks the time we can finally tell people “yes, I know nothing appeared to be happening, but finally it’s happened”. Preparing for IPv6 has to happen, because there just isn’t any stopping this particular juggernaut. IPv4 addresses will run out, and there will come a time when new web sites can no longer find a public IPv4 address.
BEFORE that happens, something has to change to allow us to work together on an IPv6 Internet. I’m doing what I can.
As a client user, I live on the Hurricane Electric IPv6 Tunnel Broker, because Comcast have yet to extend their IPv6 trial to my neck of the woods, seeing as how I live in technology-deprived Seattle.
I’m still trying to persuade my web site’s ISP, 1&1, to put an IPv6 capability in place before World IPv6 Day on June 8, so I can host my web page there in IPv6, but I definitely have my FTP server software, WFTPD and WFTPD Pro, ready to support IPv6 fully.
What are you doing?
OK, so IPv4 is probably right to be acting like the old man in Monty Python and the Holy Grail, and screaming “I’m not dead yet!”, but we certainly shouldn’t hold out any hope that it’ll be getting any better. Clonk it on the head as soon as possible, because really, it’s been extremely poorly for many years now.
I’ve mentioned before that my biggest argument that IPv4 has already exhausted itself is the mere presence of aggregating NATs – Network Address Translators, whose sole purpose is to take multiple hosts inside a network, and expose them to the outside as if they were really only processes on one host with one IP address. If IPv4 were large enough, we wouldn’t have needed these at all, and at best, they were a stop-gap measure, and an inconvenient one at that.
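For the curious, here’s a toy Python sketch of that aggregating idea – many inside hosts multiplexed behind one public address, told apart only by the translated source port. The addresses and the mapping scheme are purely illustrative, not how any particular NAT allocates ports:

```python
# A toy aggregating NAT: each inside (host, port) pair gets one public
# source port, so many hosts share a single public IP address.
public_ip = "198.51.100.1"   # illustrative public address
next_port = 40000            # illustrative port allocation scheme
nat_table = {}               # (inside_ip, inside_port) -> public source port

def translate_outbound(inside_ip, inside_port):
    """Return the (public_ip, public_port) an outside host would see."""
    global next_port
    key = (inside_ip, inside_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return (public_ip, nat_table[key])

print(translate_outbound("192.168.0.10", 51000))  # ('198.51.100.1', 40000)
print(translate_outbound("192.168.0.11", 51000))  # ('198.51.100.1', 40001)
print(translate_outbound("192.168.0.10", 51000))  # same mapping reused
```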
Well, now we can’t really stop the gap any longer. We’ve hit the first of a set of dominoes that leads to us not even having enough IPv4 addresses to support the Internet with NATs in place.
That’s right, no more /8 networks are left in IANA’s pool to assign. OK, I know it says “5/256” are left, but that’s only because the IANA (Internet Assigned Numbers Authority) haven’t yet announced that they’ve given out those last five, and they have previously announced that when they get down to five, those five will automatically be distributed.
Yesterday, the counter said “7/256”, but earlier today, APNIC – the RIR for the Asia & Pacific region (RIR – Regional Internet Registry) – bought two entries to serve their ever-growing Internet market. That will trigger the IANA to distribute the remaining five blocks.
And no, Egypt’s IP blocks are not available for re-use.
Yes, that’s right, this isn’t a “shut everything off and go home” moment – as I said before, this is merely the tipping of an early domino in a chain. Next, the last five /8s will be given to the five RIRs, and then they will use those to continue handing out addresses to their ISPs. At some point, the supply will dry up, and it will either become impossible, or expensive, to get new public-facing addresses. Existing addresses will still work even then, of course, and several of the new IPv4 address assignments will, ironically, be aggregating NATs that will allow the IPv6 Internet to access old IPv4 sites!
The only question now is what we call this momentous slide into IPv4 exhaustion. Certainly, RagNATok has a pleasant ring to it, as it invokes the idea of a twilight of the old order, a decay into darkness, but this time with a renewal phase, as the new Internet, based entirely on IPv6, rises, if not Phoenix-like from the ashes, then at least alongside, and eventually much larger than the old IPv4 Internet.
As you can tell from my tone, I don’t think it’s doom and gloom – I’m quite looking forward to having the Internet back the way I remember it – with every host a full-class node on the network. It’s going to mean some challenges, particularly in the world of online security, where there will be new devices to buy (bigger addresses mean larger rule-sets, and existing devices are already pretty much operating at capacity), new terminology to learn, and new reasons to insist on best practices (authentication by IP address was never reliable, and is particularly a bad idea when every host has multiple addresses by default, and by design will change its source address on a regular basis).
Perhaps the Mayans were right in deciding that 2012 is the year when everything changes (to borrow a line from Torchwood).
The rather unassuming name that has been chosen for this particular date – when the last assignment leaves the IANA – is “X-Day” – X as in “eXhaustion”.
The next date for your calendars, then, is World IPv6 Day, two days after my birthday, or for those of you that don’t know me, June 8, 2011, which is when major Internet presences including Google, Yahoo and others, will be switching on full IPv6 service on their main sites, and seeing what breaks. Look forward to that, and in the meantime, test some known IPv6 sites, like http://ipv6.google.com to ensure that you’re getting good name resolution and connectivity.
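If you’d rather script that check than poke at a browser, here’s a small Python sketch. It uses the same test host mentioned above; whether the lookup succeeds obviously depends on your resolver and connectivity:

```python
# A quick sanity check of IPv6 readiness: does this Python build support
# IPv6 at all, and does a known IPv6 host resolve to an AAAA record?
import socket

def resolves_over_ipv6(host):
    """Return True if `host` resolves to at least one IPv6 address."""
    try:
        infos = socket.getaddrinfo(host, 80, socket.AF_INET6)
    except socket.gaierror:
        return False
    return len(infos) > 0

print("IPv6 supported by this build of Python:", socket.has_ipv6)
print("ipv6.google.com has AAAA records:", resolves_over_ipv6("ipv6.google.com"))
```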
If you’re running an FTP server on Windows, I encourage you to contact me at firstname.lastname@example.org if you would like to test WFTPD or WFTPD Pro for IPv6 connectivity. We are currently beta-testing a version with much greater IPv6 support than before.
I saw the Belkin Play N600 HD router (F7D8301) at Costco a couple of days ago, for a very good price.
I’d been looking for a good price on an 802.11n router for some time – partly to increase coverage through my house, but also to ensure that I had a new router that would cope with improving technology as I buy it over the next few years.
Sadly, this router isn’t it – there are several existing protocols that it just doesn’t support, which is rather odd for a new router.
Specifically, I note that the router does not state support in its interface for PPTP or IPsec passthrough – protocols 47, 50, 51. When I asked the Belkin tech support about this, they directed me to “try forwarding ports on the router”, apparently not aware that there is a difference between port and protocol forwarding. That’s an astonishing lapse in ability and knowledge for technical support on a router, and doesn’t give me much comfort that the router itself is developed with skill or knowledge.
Another protocol not supported by this router, which seems just crazy when we’re one hundred days away from X-day, is IPv6. That’s right, IPv6, the next-generation Internet Protocol, required for numerous features of modern Windows systems such as HomeGroups, DirectAccess, etc (I’m sure there are IPv6-only features for Mac and Linux, but those aren’t my specialisation), and it isn’t supported. You can connect to the router as a wireless client, but the IPv6 protocol, access to local DHCP servers, etc, isn’t supplied to your host computer. My Linksys WRT54GL has supported that for several years, and this new router from Belkin can’t handle it.
Also unsupported is “6in4” (aka v6tunnel), as used in IPv6 tunnel schemes such as http://tunnelbroker.net, which is how I make my network a part of the global IPv6 Internet until Comcast gets around to supporting native IPv6 service. Again, this wouldn’t require the router to understand anything about IPv6, just to forward IPv4 protocol 41 correctly.
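To illustrate just how little the router has to understand, here’s a Python sketch that reads the protocol field out of a minimal IPv4 header – 6in4 traffic is simply an IPv4 packet with that one byte set to 41, carrying an IPv6 packet as its payload. The header below is zero-filled for illustration, not a transmittable packet:

```python
# 6in4 is an IPv6 packet carried inside an IPv4 packet whose protocol
# field is 41. A forwarding router only needs to look at that one byte
# (offset 9 of the IPv4 header), not understand IPv6 at all.
import struct

def ipv4_protocol(packet):
    """Return the protocol number from an IPv4 header."""
    return packet[9]

# A minimal 20-byte IPv4 header with protocol 41; other fields are
# zeroed for illustration, so this is not a valid packet to transmit.
header = struct.pack("!BBHHHBBH4s4s",
                     0x45,      # version 4, header length 5 words
                     0, 20, 0, 0,
                     64,        # TTL
                     41,        # protocol: 41 = IPv6-in-IPv4 (6in4)
                     0,
                     bytes(4), bytes(4))
print(ipv4_protocol(header))   # 41
```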
In addition to missing such basic functionality, the Belkin Play N 600 HD also fails in the reliability stakes. Two days it’s been in our house, and both mornings, we’ve woken up to a complete lack of Internet service and wireless connectivity, although the light on the front of the router is solid green, indicating that it thinks everything is fine.
Pinging the router does nothing, restarting computers (in the vain hope that it might be a wireless card issue, or some network driver failure, though our network has been fine for many years) does nothing. The only action that has an effect is that of restarting the Belkin router. Clearly, the Belkin can’t make it through the night without locking up.
As if to pour salt onto the wound, this router isn’t even able to increase range in our house – the boy still can’t get a connection from his room on his iPod. Perhaps that’s not such a bad thing, but since we were hoping to increase range with the router’s ability to pick signals out with MIMO technology, it seems like there really isn’t much point to us keeping the router.
Costco’s return policy is pretty reliable in cases like these – we take the failed device back, say that it wasn’t capable of reliable, basic use, and they refund us our purchase price. I’ll be giving it just a couple more days, in case Belkin has any hope to offer in terms of support of basic network router functionality, but I suspect I’ll just have to suck up the extra cost of using plain old reliable Linksys.
I’m visiting the in-laws in Texas this weekend, and I use the SSTP VPN in Windows Server 2008 R2 to connect home (my client is Windows 7, but it works just as well with Vista). Never had many problems with it up until this weekend.
Apparently, on Friday, we had a power cut back at the house, and our network connectivity is still not happening. I’ve asked the house-sitter to restart the servers and routers where possible, but it’s still not there.
So I went online to Comcast, to track down whether they were aware of any local outage. Sadly not, so we’ll just have to wait until I get home to troubleshoot this issue.
What I did see at Comcast, though, got me really excited:
So I can’t but be excited that my local ISP, Comcast, is looking to test IPv6 support. I only hope that it’ll work well with the router we have (and the router we plan to buy, to get into the Wireless-N range). Last time I was testing IPv6 connectivity, it turned out that our router was not forwarding protocol 41, the encapsulation used by the 6in4 tunnels from Hurricane Electric’s Tunnel Broker.
Who knows what other connectivity issues we’re likely to see with whatever protocol(s) Comcast is going to expect our routers and servers to support? I can’t wait to find out.
This will be the first of a couple of articles on FTP, as I’ve been asked to post this information in an easy-to-read format in a public place where it can be referred to. I think my expertise in developing and supporting WFTPD and WFTPD Pro allow me to be reliable on this topic. Oh, that and the fact that I’ve contributed to a number of RFCs on the subject.
First, a quick refresher on TCP – every TCP connection can be thought of as being associated with a “socket” at each device along the way – from one computer, through routers, to the other computer. The socket is identified by five individual items – the local IP address, the local port, the remote IP address, the remote port, and the protocol (in this case, the protocol is TCP).
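You can see those five values from a live connection in a few lines of Python – loopback sockets here, purely for illustration:

```python
# Observe the five values that identify a TCP connection, from the
# client's point of view: local address, local port, remote address,
# remote port, and the protocol itself.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()

local_addr, local_port = client.getsockname()
remote_addr, remote_port = client.getpeername()
five_tuple = (local_addr, local_port, remote_addr, remote_port, "TCP")
print(five_tuple)

conn.close(); client.close(); server.close()
```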
Firewalls are essentially a special kind of router, with rules not only for how to forward data, but also rules on connection requests to drop or allow. Once a connection request is allowed, the entire flow of traffic associated with that connection request is allowed, also – any traffic flow not associated with a previously allowed connection request is discarded.
When you set up a firewall to allow access to a server, you have to consider the first segment – the “SYN”, or connection request from the TCP client to the TCP server. The rule can refer to any data that would identify the socket to be created, such as “allow any connection request where the source IP address is 10.1.1.something, and the destination port is 54321”.
Typically, an external-facing firewall will allow all outbound connections, and have rules only for inbound connections. As a result, firewall administrators are used to saying things like “to enable access to the web server, simply open port 80”, whereas what they truly mean is “add a rule that applies to incoming TCP connection requests whose source address and source port could be anything, but whose destination port is 80, and whose destination address is that of the web server”. This is usually written in some shorthand, such as “allow tcp 0.0.0.0:0 10.1.2.3:80”, where “0.0.0.0” stands for “any address” and “:0” stands for “any port”.
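Here’s a little Python sketch of how a firewall might test an incoming SYN against that shorthand – the rule body is just protocol, source socket, destination socket, with “0.0.0.0” and port 0 acting as wildcards. This is illustrative only; real firewalls also match on address ranges, interfaces, and connection state:

```python
# Match an incoming connection request against a shorthand firewall rule
# like "tcp 0.0.0.0:0 10.1.2.3:80" ("allow" being the action taken on
# a match). "0.0.0.0" and port "0" are wildcards.
def rule_matches(rule, syn):
    proto, src, dst = rule.split()
    src_ip, src_port = src.rsplit(":", 1)
    dst_ip, dst_port = dst.rsplit(":", 1)
    ip_ok = lambda r, v: r == "0.0.0.0" or r == v
    port_ok = lambda r, v: r == "0" or int(r) == v
    return (syn["proto"] == proto
            and ip_ok(src_ip, syn["src_ip"]) and port_ok(src_port, syn["src_port"])
            and ip_ok(dst_ip, syn["dst_ip"]) and port_ok(dst_port, syn["dst_port"]))

# A connection request from anywhere on the Internet to the web server:
syn = {"proto": "tcp", "src_ip": "203.0.113.9", "src_port": 51515,
       "dst_ip": "10.1.2.3", "dst_port": 80}
print(rule_matches("tcp 0.0.0.0:0 10.1.2.3:80", syn))   # True
print(rule_matches("tcp 0.0.0.0:0 10.1.2.3:21", syn))   # False: wrong destination port
```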
For an FTP server, firewall rules are known to be a little trickier than for most other servers.
Sure, you can set up the rule “allow tcp 0.0.0.0:0 10.1.2.3:21”, because the default port for the control connection of FTP is 21. That only allows the control connection, though.
What other connections are there?
In the default transfer mode of “Stream”, every file transfer gets its own data connection. Of course, it’d be lovely if this data connection was made on port 21 as well, but that’s not the way the protocol was built. Instead, Stream mode data connections are opened either as “Active” or “Passive” connections.
The terms “Active” and “Passive” refer to the role the FTP server plays in making the data connection. The choice of connection method is initiated by the client, although the server can choose to refuse whatever the client asked for, at which point the client should fail over to using the other method.
In the Active method, the FTP server connects to the client (the server is the “active” participant, the client just lies back and thinks of England), on a random port chosen by the client. Obviously, that will work if the client’s firewall is configured to allow the connection to that port, and doesn’t depend on the firewall at the server to do anything but allow connections outbound. The Active method is chosen by the client sending a “PORT” command, containing the IP address and port to which the server should connect.
In the Passive method, the FTP client connects to the server (the server is now the “passive” participant), on a random port chosen by the server. This requires the server’s firewall to allow the incoming connection, and depends on the client’s firewall only to allow outbound connections. The Passive method is chosen by the client sending a “PASV” command, to which the server responds with a message containing the IP address and port at the server that the client should connect to.
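Both the PORT command and the PASV response encode an IPv4 address and port as six comma-separated numbers, with the port split into a high byte and a low byte. A quick Python sketch of the encoding – the helper names are mine, and the addresses are illustrative:

```python
# The address/port encoding shared by FTP's PORT command and PASV reply:
# four address octets, then port // 256 and port % 256.
def make_port_command(ip, port):
    """Client -> server: 'connect back to me here' (Active mode)."""
    h1, h2, h3, h4 = ip.split(".")
    return "PORT %s,%s,%s,%s,%d,%d" % (h1, h2, h3, h4, port // 256, port % 256)

def parse_pasv_response(response):
    """Parse a server reply like '227 Entering Passive Mode (10,1,2,3,195,80)'."""
    inside = response[response.index("(") + 1 : response.index(")")]
    h1, h2, h3, h4, p_hi, p_lo = inside.split(",")
    return ".".join((h1, h2, h3, h4)), int(p_hi) * 256 + int(p_lo)

print(make_port_command("192.168.1.10", 50000))
print(parse_pasv_response("227 Entering Passive Mode (10,1,2,3,195,80)"))
```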
So in theory, your firewall now needs to know what ports are going to be requested by the PORT and PASV commands. For some situations, this is true, and you need to consider this – we’ll talk about that in part 2. For now, let’s assume everything is “normal”, and talk about how the firewall helps the FTP user or administrator.
If you use port 21 for your FTP server, and the firewall is able to read the control connection, just about every firewall in existence will recognise the PORT and PASV commands, and open up the appropriate holes. This is because those firewalls have an Application Level Gateway, or ALG, which monitors port 21 traffic for FTP commands, and opens up the appropriate holes in the firewall. We’ve discussed the FTP ALG in the Windows Vista firewall before.
Where does port 20 come in? A rather simplistic view is that administrators read the “Services” file, and see the line that tells them that port 20 is “ftp-data”. They assume that this means that opening port 20 as a destination port on the firewall will allow FTP data connections to flow. By the “elephant repellent” theory, this is proved “true” when their firewalls allow FTP data connections after they open ports 21 and 20. Nobody bothers to check that it also works if they only open port 21, because of the ALG.
OK, so if port 20 isn’t needed, why is it associated with “ftp-data”? For that, you’ll have to remember what I said early on in the article – that every socket has five values associated with it – two addresses, two ports, and a protocol. When the data connection is made from the server to the client (remember, that’s an Active data connection, in response to a PORT command), the source port at the server is port 20. It’s totally that simple, and since nobody makes firewall rules that look at source port values, it’s relatively unimportant. That “ftp-data” in the Services file is simply so that the output from “netstat” has a meaningful service name instead of “:20” as a source port.
Next time, we’ll expand on this topic, to go into the inability of the ALG to process encrypted FTP control traffic, and the resultant issues and solutions that face encrypted FTP.