Tech-Ed Amsterdam 2012: Day 2

Breakfast was the same as yesterday. I thought of going someplace else, but didn’t, for three reasons. First, tech-ed starts at 08:30, and the restaurants are on the other side of the RAI. I would really have to hurry to get there, have a meal, and get back to the RAI. Second, the food at the restaurants is so great that my evening meal makes up for it. And lastly … a sizable portion of the world population is dying of famine and dehydration. During the time it took you to read this paragraph, several people just gave up and died. So it would be a bit snobbish to make a big deal out of it and scorn the perfectly good food I am given.


Now, tech-ed. There are noticeably more people here. Yesterday, I saw the exhibition hall when the builders were in the process of building the booths. I have to say that it will be impressive if everything is finished this morning, because yesterday it was still an unholy mess, like you’d expect on a big construction site that is nowhere near its due date.


The other big hall where people gather between sessions is completely devoid of seating arrangements. I’ll have to check out the exhibition hall later to see what it looks like. I am used to seeing these kinds of areas full of bean bags and other things to sit on.


Of course yesterday evening I had to phone the home front and talk with my oldest daughter, who is apparently missing me very much. And she started asking about her present, because of course I can’t go ‘on work holiday’ without bringing back a present. I also had to explain who this ‘Kate person’ was that I had spent the day hanging out with. This was a point of interest for my wife as well :-)


My youngest is much more emotionally independent. She only has to know that I’m coming back and that I’ll have a present, and she’ll be satisfied.


The keynote


It was a usual Microsoft keynote, which started with a lot of action movie music and light effects. Lots of deep bass. The keynote was delivered by Brad Anderson, with intermediate speakers like Mark Russinovich. The room was almost completely full.


As an aside, I have to mention that this is the first tech-ed where the wifi experience is plain crappy. The network stays up, but the internet connection keeps crapping out. The occasional connection comes through and stays up for a while, then goes down again. At first I thought it was my laptop, but then I noticed the guy next to me having the same problems. I have a feeling that whoever was in charge failed to anticipate the load that 8000 nerds would place on the internet connection.


Mobile 3G and the like seemed to work though, given the number of mobile users who could connect to the demo application.


The keynote started with a quick overview of Microsoft Hyper-V and all the goodness you now get out of the box with Windows Server 2012. The numbers they showed certainly looked impressive: per VM, 64 cores, 1TB (or was it 4?) of memory, and over a million IOPS of data transfer. It is very uncommon for vice presidents to mention the competition during a keynote, yet the name ‘VMWare’ was used quite a lot. I really had the feeling that Microsoft was throwing down a gauntlet.


In all fairness, the numbers shown were certainly impressive, and seem to give VMWare a run for its money. The important consideration is that you do get a lot out of the box with Windows Server 2012, whereas with ESX you don’t. I am not administering a VM host environment so I could be wrong, but it does look more flexible and powerful.


After that there was a demo of some of the Azure features coupled with Visual Studio 2012. As far as keynotes go, this one wasn’t too bad. The Azure demo made me change my selection for the next session. Originally I was going to see something about SQL in hybrid IT, but I decided to go to ‘Windows Azure Today and Tomorrow’ instead.


I should point out that there was not much to choose from for the first session slots. I have the impression that they kept Tuesday morning free from real content so that the late arrivals would not miss anything important.


FND05: Windows Azure Today and Tomorrow


This talk was hosted by Scott Guthrie. A very knowledgeable person for sure, but not a natural speaker like Mark Russinovich.


Scott explained a bit about Azure, and how the payment plan works. In a first for Microsoft (in my experience), you only pay for what you actually use. You can dynamically increase CPU, memory, storage and other things when you need them, and you only pay for the time you are using them, after which you can just scale your hardware / services back down.


A basic virtual machine with Windows Server 2012 costs almost nothing. I would mention the cost here if the wireless actually worked and I could check. I am not entirely clear right now whether the metric looks at hours in use or hours running with a given configuration, and whether it counts the hours when the machine is shut down.


The latter seems weird of course, but consider my scenario: I want a machine that is performant enough for running Visual Studio and debugging my various hobby projects, and I want to be able to use that machine from anywhere. Currently, that is a Windows 7 machine in my basement, with 4GB RAM and a Core2Duo. It is getting dated but still fast enough.


However, I might need to replace that machine in the near future, and for the cost of buying a new machine, it might be worthwhile to run my dev machine in the Azure cloud. Especially if it did not count the hours during which I am not using it, which would make it dirt cheap. And I would have the advantage of being able to work from my laptop in the living room, or a hotel room, without needing to worry about my data or the performance of whatever machine I am using.


There was some more talk about Azure and the various user scenarios. Mark Russinovich went in-depth in the next session, so I’ll not cover them here.


AZR208: Windows Azure Virtual Machines and Virtual Networks


This session was hosted by Mark Russinovich. I’ve heard him speak before and he is a good speaker. Mark started with an explanation of Azure, and private clouds. One thing that was made clear is that cloud machines are just VHD files, just like normal Hyper-V machines. This means that transferring machines to and from the cloud, or between cloud providers, is completely transparent. There is no lock-in.


One way to use the cloud is to create a VM in the cloud, move your application there, and then scale up the VM as needed. On top of that, you could choose to run components of the application on cloud services. One such service is SQL Server, which can be scaled up to Godzilla-like proportions. The good thing (other than not having to maintain the monster hardware) is that Microsoft takes care of patching and other things.


In your virtual machines, you can also add storage on an as-needed basis, backed by cloud storage solutions. The virtual disks are stored on redundant disks in the SAN, meaning that you are isolated from the normal disk problems that might occur. Your own disks can of course be configured for maximum performance, e.g. with striping.


Mark then covered how services are organized in different groups, so that software patching and network maintenance can be done without resulting downtime for your applications.


And finally there was an explanation of virtual networks. It is a given that the different servers in your collective can talk to each other, but in an enterprise environment, you may want to domain-join those machines. And of course, you would not want that to happen over a plain internet connection. For that purpose, Azure supports a VPN connection to your own infrastructure. This is a hardware VPN, and a nice feature is that Azure can generate VPN configuration scripts for the most common firewall manufacturers, like Cisco and Juniper. Once that is set up, those machines appear to be on your own local network.


I thought that was a particularly cool feature, because it allows a company to move a great many (non-critical) machines up to the cloud, where their cost can be budgeted up front and no on-site personnel is needed to support the infrastructure. Currently, if you are running your own VM infrastructure, you are supporting it all, and you may have a lot of infrastructure, which may cost a lot more money than needed. Then there is the square meter price of your data center, electrical power, cooling, … Even if you move only the non-critical machines to Azure, a lot of benefit can be had.


A final thing worth mentioning is that Azure currently runs across 16 massive data centers which can take over from each other. So if one data center goes offline due to a meteor strike (to name a cool example), another can seamlessly take over. In fact, Mark mentioned that it is a hard promise that any data store change is replicated to at least one other data center in the same geopolitical area within 15 minutes. This means that data from EU companies stays in the EU, US data stays in the US, and so on. For some people this is irrelevant, but many companies that are subject to regulatory bodies have strict requirements to make sure that certain data may never leave the EU or the US.


Before tech-ed, I had a fairly jaded view of Azure, but after the things I have seen in this and the previous sessions, I have come around. Azure (or clouds in general) is the way of the future. We are still in a transitional phase where companies start their own private clouds (be they Hyper-V or ESX or something else), but within 10 years, I suspect that many companies will move a great deal of servers into a cloud that they don’t manage themselves. After all, why would they?


And given that there is no cost of entry, that you only pay for what you use and that you can scale up and down dynamically, there is no doubt in my mind that this will take off.


WSV205: Windows Server 2012 Overview


This talk was hosted by Michael Leworthy.


As soon as I sat down and he started talking and showed an overview of his topics, I realized I had made a big mistake. This was going to be a marketing talk, ‘look how great we are’-style. I gave it 5 more minutes, in which I was proven correct, and decided to leave. There are few enough Visual Studio talks as it is, so I changed to DEV213.


DEV213: What’s new in Visual Studio 2012


This session was hosted by Orville Mcdonald. It was already 10 minutes underway when I came in, but I managed to pick up easily enough.


Right when I came in, he was demoing how easy it is to develop Metro apps in Visual Studio 2012. The main reason for Metro is to have a unified approach to developing for multiple platforms, so that your app might be usable on your desktop, on your tablet and on your phone.


I cannot judge how easy it is to develop Metro style, but testing it sure looked great. There is a simulator that can be used to test your app in real scenarios. The simulator can do all the things any real tablet can do: the orientation can be changed, you can ‘slide’ your finger, and perform all manual manipulations in a simulated way.


Then there was a demo of migrating a web application from a local SQL express database to one hosted on Azure. I can’t comment much on this, except that it is what I would expect from a database migration. The fact that it is in the cloud is less interesting, and I talked about that already.


One of the annoying things in VS2012 that is new, and which I cannot believe made it into the release candidate, is the all-caps menu. It is the one and only application I know with a menu in all caps, and I hope they reassess that decision. It is loud. Don’t believe me? Consider HOW RELAXING IT IS TO SPEND ALL DAY LOOKING AT A MENU IN ALL CAPS! LOUD, ISN’T IT?!?!


Seriously…


I was told that this will become optional in the RTM.


To be honest, I had hoped that this session would be more about language and debugging topics, but I guess Metro and Azure are the new kids on the block. In any case it was interesting to see how it works and how it can be debugged.


DEV316: Application lifecycle management tools for C++ in Visual Studio 2012


This session was hosted by Rong Lu. I managed to talk to her in private for a couple of minutes, because I wanted to know what exactly was covered. As soon as I told her that I had been to Kate’s pre-conference talk, she told me that if I had any other place to go, it might be worthwhile doing that, because she’d be covering the same topics, only a bit more in depth.


I considered going to WCL332 instead, but that turned out to be about deployment tools and deployment diagnostics. Not really my cup of tea, so I decided to go to DEV316 after all.


The first thing I noticed was that the crowd was the same as during Kate’s pre-conference talks. No surprise really. The 30 of us are probably the only C++ programmers in the whole of tech-ed.


Rong covered architectural discovery. The main user scenario for this feature is analyzing code that someone else wrote. C++ codebases tend to live a long time, and many C++ programmers have to maintain or update code they didn’t write themselves. In short, the architectural discovery tool builds a diagram of the binary components. These components can then be broken down into explorable and expandable layers. It is possible to edit and save the diagram and mark it up.


This is surely a handy tool for analyzing other people’s code, as well as for creating images for software design documents. It was undecided at this point, but this functionality will probably be reserved for the Ultimate edition of VS in the next release.


The next demo covered static code analysis. This is really user-friendly: you can easily figure out the problem, and there is even a right-click menu item for inserting the suppression pragma if you want it.


There are a couple hundred rules that are checked by the code analysis, and rule sets are programmer-configurable. There is 64-bit support, and all rules, including the concurrency analysis rules, are available from the Pro version and above.


The unit testing framework for unmanaged C++ was shown again. This is available in all versions, though it will be really basic below VS Professional. From Premium onward, continuous run after build is available, which allows you to run the unit tests with every build. The unit testing framework is extensible with third-party frameworks.


Code coverage results were available with a single click. The covered code itself was then shown as blue lines versus red lines. It looked very well made, and it will certainly be very useful for ensuring the quality of algorithms.


This session topic was very interesting, and Rong Lu gave an excellent presentation. Kate Gregory was there as well.


Wrap-up


Day 2 was filled with lots of good information. Azure was the main surprise for me. And Rong Lu’s presentation was worthwhile as well.


One interesting factoid: at this edition of tech-ed there are more C++ language talks than C#, VB and F# combined :-)


Oh, and the internet connection stayed down until the end of the day. I was told at the wireless booth that they were trying to fix it. Wireless was up, but internet for the entire RAI had gone down. Perhaps they’ll figure it out by tomorrow.
