Monthly Archives: June 2012

Mulling over the SBSC call

Gardening tasks this afternoon.  Bought another Moonshine Yarrow to match the other side of the front garden.  Planted a Buddleja Buzz Magenta in the back garden to bring in more bees and butterflies.  My first year planting of New Dawn Rose is putting on its second bloom.


On the geek front I’m still mulling over yesterday’s SBSC call with Eric Ligman and company.  Windows 8 lately is proving to me that Microsoft only listens to electronic feedback and poorly designed surveys with sampling errors of plus or minus 16 percent or so.  In their actions and deeds lately I do not hear them “listening”.  Truly listening.  They are a beancounter-run company who wants metrics.  And quite frankly you and I do not give them metrics.  Lord knows I don’t.  If they looked me up in their Microsoft partner profile they’d see that I’ve not had a single SBS sale attributed to me.  Never.  Ever.  Yet how many customers of SBS, how many partners that sell and install SBS still, to this day, read this insane blogging that I do.


It was good to hear other partners say in the call everything that I’ve been saying for months.  Microsoft, when you say “we’ve talked to partners” when you’ve made these decisions… I wonder: who are you talking to?  Do I think they’ll make changes and keep the SBSC?  I hope so.  They said that they needed to push partners to embrace the cloud.


ABSOLUTELY.  But gang, having a required exam on Office 365 does not cover all the things we need to know about the cloud.  As even Pinterest and Netflix found out this weekend, cloud deployments are not a walk in the park.  The cloud is more than Office 365; it’s more (quite frankly) than Microsoft right now.  Microsoft is the slow man out of the horserace and, if the headlines are right, they are not pushing the tools to the developers like they should if they want to be more in the cloud.


And don’t get me started on how System Center Essentials is not an SMB solution.  Intune doesn’t cover servers.  I said when Intune shipped that it wasn’t SMB enough and that it would turn into a mid-market tool, and I was right.


So will Microsoft listen to those in the SBSC program and keep it alive?


Only time will tell.


In the meantime I need to get back to blogging about Hyper-V and SBS Essentials, and pondering internal wireless at the office and how the consumerization of IT (aka iPads inside the office) means that I feel like I’m lowering my wireless security standards.


More on that tomorrow.  Along with boring you with random chatter about my summer gardening tasks.

A new found love for Microsoft Small Business Server

I have always loved, supported and championed the Microsoft SBS product. I have worked with it since SBS 4.0 and always appreciated it.
I have installed every version of it and had a hand in some aspects of making it a better product.

Whilst I was a Microsoft MVP, I visited Redmond many times and had many discussions with the SBS project team.
I witnessed first-hand the battles the project team had, getting simple things like the Exchange 2003 16 GB limit lifted.
The enterprise teams did not take too kindly to the SBS product. They did not understand it and did not want their products in the suite.
They saw no reason to integrate with it and no reason to give big features at a small cost within the SBS suite. They did not want to cheapen their products.

I saw Microsoft ISA Server ripped from the product in SBS 2008. I saw the SBS team stressed and I saw the product change.
As an old-school Microsoft Small Business Specialist and a winner of the SMB150 for 2012, I thought I still loved SBS.

Unfortunately, I got so used to the path SBS took and so used to the product that I lost my love for what the SBS project team does and what the product delivers.
I was a zombie who simply went through the motions, installed and set up SBS, and then moved on.

Recently a project has reawakened my love for SBS. I had to do an enterprise installation.

The project needs two domain controllers, a SharePoint Foundation 2010 server, an Exchange 2010 server, Threat Management Gateway (ISA), a DMZ (perimeter network), an Exchange Edge Transport server, a Microsoft 2008 web server, a SQL Server, three or more Windows Enterprise servers in a load-balanced/fault-tolerant Remote Desktop server farm, a WSUS server, a backup server (that images each server and creates Hyper-V virtual machines) and much more.

We in SBS land are truly blessed. The integration of Exchange, Windows Server (in domain controller mode), file/print, WSUS, SharePoint Foundation and more, all using blessed wizards.
No need to download prerequisites, manually install, add features/roles, make decisions or worry about all the tools integrating together and playing nicely.
Who needs to set up the finer details of email policies, Hub Transport, IIS, certificates, WSUS policies and SharePoint? Not me.

This is where the wizards in SBS are truly wizardry. They make things simple. Sure, I can manually configure DNS and DHCP and set up users in Active Directory.
I can set up the network settings and even install WSUS. I can set up and configure SharePoint Foundation server. Or can I?
Using just my knowledge of SBS, I can’t. I did not know that SharePoint can’t have its database on a domain controller. I did not know it was recommended to run on two, maybe three, servers.
The SBS project team must have worked incredibly hard to setup SharePoint Foundation on the same box as Active Directory and Exchange 2010.
Did you know Exchange should not be on a domain controller? I knew all this but had slowly forgotten and dismissed it.

Someone in the SBS team had to convince someone on the Exchange team to let them have their product installed on a domain controller. Convince them to allow SBS to have the Exchange product amongst its features. Then the hard work began. Someone had to set up wizards and an environment to tie it all together and make it work for you and me, in a simple way.
Someone had to set up Remote Web Workplace, WSUS, Outlook Web Access and all the other web-based tools on the SBS IIS server.
Someone had to tie it all together and make it work.

In my current scenario, that is me. I am setting up this enterprise installation and I am downloading the prerequisites. I am tying everything together and I am finding all the dead ends.
I am doing the research. I am struggling with the different route tables and TMG firewall rules. I am taking many, many hours to set up something that would be over and done with in much less time if this were SBS.

The SBS team have gone where we do not need to go. They have made it easy for us. They have given us something that works and is reproducible every time.
They have done loads of hard work so you do not have to.

So, now I need to stop taking SBS for granted. I need to understand where SBS comes from and what it really is. I have fallen in love with SBS all over again.

INotifyPropertyChanged Implementation for VS2012

I told some folks at the East Tennessee .NET UG (ETNUG, which was adorably put on the reservation board at the after-meeting bar as “Eatnug”) meeting in Knoxville that I would post info on a .NET 4.5 version of INotifyPropertyChanged that used CallerMemberName. I also said I wanted to reference other people rather than making it up on my own, because I’d rather post something already vetted.


CallerMemberName is a new feature of .NET 4.5 that lets you access the name of the member that called the current method. It’s implemented by adding an attribute to an optional parameter:


public void Foo([CallerMemberName] string callerMemberName = null)


The CallerMemberName attribute signals that the parameter should contain the name of the calling member, to be filled in automatically if the calling code doesn’t pass a value. Since it’s an optional parameter, the calling code doesn’t need to pass a value, and should only pass one when chaining the “real” caller member name. Unfortunately, Intellisense doesn’t help you out here, so a really good parameter name, consistent across your project or organization, is a great idea: I like callerMemberName as a direct indicator of the purpose/usage of the parameter.


There are a couple of obvious places to use CallerMemberName – logging and implementing INotifyPropertyChanged.
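To make the logging case concrete, here’s a minimal sketch. The Logger class, its Trace method, and OrderService are all hypothetical names invented for illustration, not a real API:

```csharp
using System;
using System.Runtime.CompilerServices;

// Hypothetical logger: the compiler fills in callerMemberName at the call
// site, so there is no reflection or stack-walk cost at run time.
static class Logger
{
    public static string Trace(string message,
        [CallerMemberName] string callerMemberName = null)
    {
        string line = callerMemberName + ": " + message;
        Console.WriteLine(line);
        return line;
    }
}

class OrderService
{
    // The compiler rewrites the call below to Logger.Trace("starting", "SaveOrder").
    public string SaveOrder()
    {
        return Logger.Trace("starting");
    }
}
```

Calling new OrderService().SaveOrder() logs “SaveOrder: starting” without SaveOrder ever spelling out its own name in a string.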


I asked on Twitter and checked around the Internet, and there are a bunch of simple examples that illustrate CallerMemberName. Here’s one in English, and one in Hungarian. There’s also a version in the documentation for the INotifyPropertyChanged interface. If you’re interested in how CallerMemberName works, check these out.


If you’re interested in a bang-up, super-great, sea-salt and malt vinegar (potato chip) version, check out Dan Rigby’s blog posts here and especially here, where he evaluates versions of INotifyPropertyChanged that have been used elsewhere. The basics: the Set method should contain exactly one method call, the event should not be raised unless the property actually changes to a new value, no strings should be used (to avoid typo bugs), and no reflection or callstack access should be used (to maintain performance).


I really like the flow of information on the Internet when people find something good and add to it. I have two enhancements I want to add – I hate having SetProperty in every class and I didn’t find a VB version.


First, you can use a common base class to avoid redundant SetProperty methods. This simplifies the data class. Note that it’s common to supply a protected OnPropertyChanged method in a base class. If it is called from something other than the property setter, such as the SetProperty method, the CallerMemberName can be passed in, passing on or chaining the original property name:

public class Foo : FooBase
{
    private int _bar;
    public int Bar
    {
        get { return _bar; }
        set { SetProperty(ref _bar, value); }
    }
}

public abstract class FooBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected bool SetProperty<T>(
        ref T storage, T value,
        [CallerMemberName] string callerMemberName = null)
    {
        if (object.Equals(storage, value)) return false;

        storage = value;
        this.OnPropertyChanged(callerMemberName);
        return true;
    }

    protected void OnPropertyChanged(
        [CallerMemberName] string callerMemberName = null)
    {
        var eventHandler = this.PropertyChanged;
        if (eventHandler != null)
        {
            eventHandler(this,
                new PropertyChangedEventArgs(callerMemberName));
        }
    }
}


Since I didn’t find this available in VB, I translated:


Public Class Foo
    Inherits FooBase

    Private _bar As Integer
    Public Property Bar As Integer
        Get
            Return _bar
        End Get
        Set(value As Integer)
            SetProperty(_bar, value)
        End Set
    End Property

End Class

Public Class FooBase
    Implements INotifyPropertyChanged

    Public Event PropertyChanged(sender As Object, e As PropertyChangedEventArgs) _
            Implements INotifyPropertyChanged.PropertyChanged

    Protected Function SetProperty(Of T)(ByRef field As T, value As T,
                    <CallerMemberName> Optional callerMemberName As String = Nothing) _
                    As Boolean
        If Object.Equals(field, value) Then Return False

        field = value
        OnPropertyChanged(callerMemberName)
        Return True
    End Function

    Protected Sub OnPropertyChanged(
                    <CallerMemberName> Optional callerMemberName As String = Nothing)
        RaiseEvent PropertyChanged(Me, New PropertyChangedEventArgs(callerMemberName))
    End Sub

End Class

Back in town

I’ve decided to start blogging again, on the subject of C++. A couple of years ago, just before the release of VS2010, I had become jaded with C++. The standard was still nowhere near finalized, and Visual C++ was getting none of the ‘designer’ loving.

Sure, we had C++/CLI, but only after the abomination that was Managed C++. And while C++/CLI was a decent language and indeed ‘just worked’, the only thing it was good for was writing glue code to run native code in a managed wrapper. For all other things, C# was a vastly better choice.

Fast forward a couple of years, and it is a whole new world.

The standard has finally been ratified, C++ has gotten a much needed refresher (both language and library wise), it has suddenly become hip again with Metro and the need for fast code with a small footprint, and with interesting things like PPL, AMP and ALM, there is a brave new world to be discovered. I am excited about C++ again!

I am also not typing this on my development machine or my laptop, but on the Windows Server 2012 machine that I created in the Azure cloud. It is lovely to have a performant dev machine to play with. Given the very low cost of Azure VMs, I can’t really justify buying a new development machine when the old one kicks the bucket. And that is not even considering the benefits of having access to the machine everywhere, having it patched automatically, and never having to worry about hardware problems.

Ok, I suppose no one really missed me or even knew I was gone for 3 years. I also decided to come up with a new name for my blog. The cluebatman theme was getting a bit dorky. C++ programming on cloud 9 is better for now. The new C++ standard has made me happy, and I am running my dev machine in the cloud.

Still cheesy…

When I come up with something better, I’ll change it.

Anyway, I’m back!

Tech-ed Amsterdam 2012: Day 5

I checked out and brought my luggage with me to the RAI. There is a luggage / cloak room where almost no one drops off their stuff, so I am using that one instead of the main one. Hopefully, it’ll save me some time when it is time to leave.


Yesterday I hung out with Steve for a while. It’s things like these that make tech-ed more than just about learning. As I mentioned earlier, it is nice to stay in touch with people across years.


DEV332: Async made simple in Windows 8, with C# and VB.NET


This session is hosted by Dustin Campbell


Async is the norm for WinRT, where asynchronous programming is the only way to program. Synchronous programming and blocking are no longer acceptable for user applications, in order to ensure that applications are responsive and scalable.


Futures are objects representing pieces of ongoing work. They are objects in which callbacks can be registered to be executed when the work completes (like doing something with downloaded data). Futures are basically syntactic sugar to make existing async programming patterns more palatable. The only downside is that you get nested lambdas for tasks that execute in several steps. Apparently, this is called macaroni code.


To fix this, C# has await and async keywords.


Await takes the rest of a method and hooks it up as the callback for the asynchronous operation being awaited. The async keyword is used on the method itself to tell the compiler it has to do this. The callback will always happen on the same thread that the operation was started from, so resource contention is not a problem, because while the code is running, the thread is not doing anything else.


So while your source code looks like something that executes synchronously, it is actually broken into different pieces which are executed asynchronously. This is really neat, and it hides a lot of the ugliness of asynchronous programming. Even if you are not programming for Windows 8, this is a valuable feature for regular applications that require asynchronous programming.


Exception handling is built in, because the underlying IAsync operation captures the exception and presents it to the caller. Exceptions can then also bubble up through various completion tasks, and can be handled simply in the event handler like you would normally do. This is sweet, and much, much, much more convenient than if you had to deal with it manually.
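A rough sketch of what this looks like in code (my own example, not from the session; the method names are made up):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    // The compiler splits this method at the await into a callback, but an
    // exception thrown after the await still lands in the catch block, just
    // as if the code ran synchronously.
    public static async Task<string> DownloadAsync(bool fail)
    {
        try
        {
            await Task.Delay(10);   // asynchronous wait; no thread is blocked
            if (fail)
                throw new InvalidOperationException("download failed");
            return "data";
        }
        catch (InvalidOperationException ex)
        {
            return "handled: " + ex.Message;
        }
    }

    static void Main()
    {
        Console.WriteLine(DownloadAsync(false).Result);  // data
        Console.WriteLine(DownloadAsync(true).Result);   // handled: download failed
    }
}
```

Without await, that try/catch would have to be replicated in every continuation lambda by hand.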


SIA311: Sysinternals primer: Gems


This session is hosted by Aaron Margosis. I’ve seen him present a similar talk a couple of years ago.


The room is not full; plenty of seats are left open. I think this has to do with the fact that it is the last day. Aaron announced that there would be a book signing, but also mentioned that in their infinite wisdom, the organizers have decided not to have a bookstore on site. Yeah… I noticed. Someone should have his ass kicked because of it.


The entire session was demo driven, so I didn’t take notes. It was mainly about the unknown utilities, or unknown features of well-known utilities, in the Sysinternals suite.


DEV334: C++ Accelerated Massive Parallelism in Visual C++ 2012


This session is hosted by Kate Gregory, and covers the new C++ AMP tools which allow you to offload number crunching to the GPU. The room is not full, I suspect it has roughly the same group of people who were also at the pre-con sessions.


The session started with an overview of why you want C++: control, performance, portability.


With AMP, your code is sent to an accelerator. Today, this accelerator is your GPU, but other accelerators might appear. The libraries are contained in vcredist, so you can distribute your AMP app just like any other app. And because the spec is open, everyone can implement it, extend it or add to it. Apparently, Intel have already done that.


The key to moving data to and from the GPU is the class array_view<T,N>, which represents a multi-dimensional array of whatever. You populate those structures, and then call the parallel_for_each() library function. This function will do all the heavy lifting and data copying for you. When the parallel_for_each finishes, the result will be ready for you.


Some restrictions:


You can only call (other) AMP functions. All functions must be inlineable, use only AMP-supported types, and you won’t be doing pointer redirections or other C++ tricks. There is a list of things that are allowed and not allowed, but they are really all common sense.


There is also array<T,N>, which is nearly identical to array_view, but if you want to get data out, it has to be manually copied. At least that was my understanding. Things are going fast at this point so it is possible I’ve missed something.


If you want to take more control of your calculation, you can use tiling. Each GPU thread in a tile has a small programmable cache, which is identified by the new keyword tile_static. This is excellent for algorithms that use the same information over and over again. There is an overload of parallel_for_each which takes a tiled extent. However, the programmer is responsible for preventing race conditions, so use a proper access pattern with tile barriers.


What is particularly interesting is that Visual Studio 2012 has support for debugging and visualization. You can choose between CPU breakpoints and GPU breakpoints as the debugger type, and you need to debug on Windows 8, apparently. It just works, and this was probably a huge chunk of work for someone, somewhere in the VS debugger team.


There is also a concurrency analyzer which is really good for figuring out CPU / GPU activity and how it correlates to your code.


Wrap-up


That’s it for today. Time to go home.


I am glad attention got called to the fact that there is no bookshop. I’ll have to put that in the official feedback as well. And speaking of silliness: this Tech-ed there was exactly one session about the new C# keywords for asynchronous programming, and one on .NET 4.5 features. And for some inexplicable reason, they got scheduled in the same timeslot. Someone dropped the ball there as well.


Tech-ed was a valuable experience yet again. I’ll post an overall Tech-ed wrap-up tomorrow.

Some feedback on the Metro-ing of Knowledge base articles

Gardening activities tonight – planted May Night Salvia and refreshed the Cocoa Mulch in the back rose garden.


Geek activities tonight… blogging to raise awareness of a Microsoft change to KBs that slows me down and loses needed information, and to hopefully get someone to understand and tweak the changes they recently made to Microsoft Knowledge Base articles.


So tonight they “metro’d” the KB articles.  Fine, go Metro happy.  But how about not losing a key piece of info in the process of Metroing it?


Go to the sample of http://support.microsoft.com/default.aspx?scid=kb;en-us;2686509 



At the top, where it used to show the article ID in small letters, it also would say the revision number and the date.  Don’t see it, do you?  To find it, click on that “View other products” link, which jumps you down to the bottom of the KB article, and then scroll back up a little bit.


But Susan, it’s only one click and a scroll.  I mean, it’s only a few more clicks and a few more seconds out of your life.  Really, what are you crabbing about?


I watch RSS feeds of KB articles, and when I see one of interest I look at it.  And the very second thing I look at, after I read the title of the KB, is the revision number and the date, to determine whether the KB is brand new or not.



See that?  The key info that I look at is now buried at the bottom of the KB.


Microsoft, when you change stuff, ASK PEOPLE WHO USE THE STUFF BEFORE YOU CHANGE IT, PLEASE.


http://webcache.googleusercontent.com/search?q=cache:qK7oI_dywroJ:support.microsoft.com/kb/2686509+ms12-034:+Description+of+the+security+update+for+CVE-2012-0181+in+Windows+XP+and+Windows+Server+2003:+May+8,+2012&cd=1&hl=en&ct=clnk&gl=us


That’s a google cache of how they used to look.



See how the info used to be right up at the top at eye level?


So Microsoft, Metro all you want, but one of the principles of Metro design is that information is clear and easy to read.  Finding the release date and revision number is no longer in the Metro manner of delivery.


Please fix it.  Every second I don’t have to click and scroll means I have that many more seconds to plant in my garden.  Should get seeds for Mexican Sunflowers next week.

Upgrading to Windows 8..

To save me plagiarising, I will give you the link to Mary Jo Foley’s article..

http://www.zdnet.com/blog/microsoft/microsoft-details-its-windows-8-upgrade-plans/13051

There.. that was quick.. Smile

Techspot have a great little table which may be easier for you..

http://www.techspot.com/news/49196-windows-8-upgrade-paths-leaked-xp-vista-and-7-supported.html

As usual, cross platforms and languages require a clean install. Personally, I favour the clean install anyway and always have.

One thing I didn’t see mentioned was whether upgrading Windows 7 which has XP Mode installed would be smooth and that XP Mode would still work. I never took the time to find out if Windows 8 would even accept a fresh XP Mode installation, so I can’t say one way or the other.

I will say that since deleting Windows 8, I have actually missed having it around, but needs must and I had to do it to preserve Windows 7. I do not consider that it is worth compromising my production OS for a beta.

Bear in mind that the above is based on an element of rumour, but if it is true, XP users get a better deal than they did when upgrading to Windows 7..

Windows 8 is in the wings, waiting for the curtain call.. Good luck, everybody.. Smile

WCF Async Queryable Services features

I already made two videos on WCF Async Queryable Services architecture and tooling.

I just published a new one to present WAQS features.

WCF Async Queryable Services Features

Now you can enjoy WAQS!

Please give me your feedback!

Windows 8 – Preliminary list of Security improvements

Windows 8 will provide further security improvements and a preliminary list is noted below:

How Windows 8 Beefs Up Security http://www.securitynewsdaily.com/2008-windows-8-security.html

QUOTE:  Windows 8 promises to be much more secure than Windows 7 — so much so that some users might not like it.  Chris Valasek, a researcher with the San Francisco security firm Coverity, has been playing with the developer preview version of Windows 8 since last fall.  He told the British tech blog the Register that while the internal structure is not too different from that of Windows 7, there are a few new features that will nonetheless beef up Windows 8’s security considerably.

App store – New Windows 8 Apps will be contained by a much more restrictive security sandbox

Internet Explorer 10 — Locking down the browser with improved Flash & Java protection and other safeguards

Secure Boot — It means that all installed operating systems, whether on a hard drive or on an optical drive, will be checked for digital certificates of authenticity before they’re allowed to start the machine.

Windows Defender — Windows 8 will have a Microsoft first: built-in antivirus software

DNSChanger Malware – FBI will take infected PCs offline on 07/09/2012

In about 10 days, the FBI will carry out another stage of malware cleanup, as noted below:

DNSChanger Malware – FBI will take infected PCs offline on 07/09/2012 http://www.securitynewsdaily.com/2030-dnschanger-deadline.html

DNS-CHANGER MALWARE test site (if you see RED your PC may be infected … GREEN indicates no infection is present)
http://dns-ok.us/

QUOTE: In 10 days, there’s a chance you will not be able to access the Internet on your personal computer. No email, no Facebook, no Google, no Twitter — nothing.  This potentially dire situation is due to the nasty DNSChanger Trojan, and the fateful date of July 9, on which the FBI is set to take all computers still infected with the malware offline for good. 

Launched by Estonian cybercriminals, the DNSChanger malware infected Windows PCs, Macs and routers across the world and enabled the crooks to hijack victims’ Web traffic and reroute it to rigged sites. After the FBI, in “Operation Ghost Click,” busted the criminals last November, the FBI set up surrogate servers to keep the computers infected with the Trojan temporarily online so users could clean them.

But on July 9, those surrogate servers are coming down.  In his Krebs on Security blog, researcher Brian Krebs cites a statistic from the DNSChanger Working Group, which estimates that more than 300,000 computers are still infected with the malware.
