9 Sep 2011

On Change

Author: onq | Filed under: Pontifications
One average Tuesday morning in September, I headed into my IT department’s manager/director meeting like we did every other week. We went through the various reports via PowerPoint and had nothing particularly exciting or different coming up in our schedule. As we concluded the meeting and started to head back to our offices upstairs, several of us did what we always do at the end of the meeting – check our pagers for headlines using the news subscriptions that came included with our pager accounts. I had just flipped my pager open when I heard one of my peers say “Oh God, a plane has hit the World Trade Center in New York City.” At that point, all of us opened our pagers to see what the headline alerts said, and quickly made our way upstairs to check out the TV in our small conference room.


By the time we got up there, several of our staff were already congregating in the conference room around the TV watching the live news coverage, and the second plane had just struck the second tower. Needless to say, a pall had fallen over the otherwise energetic feel of our 2nd floor group. When I finally got back to my office, I checked voicemail and had a message from my mother just saying that she figured I had heard the news by now and that she loved me and would talk to me when convenient. I called and left a voicemail for my wife, who is a school teacher, to call me when she had a chance. Since the office building is located across the freeway and just a few blocks down from the Dallas World Trade Center, there was some talk of evacuating our building and the surrounding area early in the day. Once the nature of the attacks became more clear, talk of evacuation was lifted, and I took an early and long lunch break to go find a hand-held TV so I could follow the news coverage from my office, where my staff was already tracking me down to talk about what the attacks meant to us in particular and our company in general. Then around 2 in the afternoon, our department director called an all-staff meeting (only the second one that had been called in the 9 months I’d been working there) where we as a department talked about the impact the attacks had on us personally, departmentally, and as a global company.


Over the course of that afternoon, it became perfectly clear that life as we knew it had come to an end, and we were all going to enter a new reality, and no one knew what that reality would look like. Within a couple of months, I had first-hand experience with the new reality of air travel, as I had the opportunity to fly to Chicago for the company to do some training with a group that had just been acquired through a merger. I was already terrified of flying, and all the changes that had happened since 9/11 had taken that to a new level. I compromised with my company by agreeing to fly to Chicago if I could return by train (I wanted to do a round trip by train, but I couldn’t get to Chicago in time that way). Well, that set off all kinds of alarms with the TSA – someone who hadn’t flown in nearly 5 years gets booked on a one-way flight to Chicago for no apparent reason. Yep, I got “randomly” selected for extra security screening. Twice. And almost again right at the gate.


Since 9/11, we’ve all adapted in one way or another to the changed reality that event brought to all of us. And as I recall the events around those first few days after the attack, I can’t help but relate that to another change in my life.


One Monday in January I went to see my doctor about a strange pain in the right side of my abdomen. I thought I may have pulled a muscle and hoped that he could just get me something for the pain and recommend ways to keep from injuring the muscle even further. As a matter of course, he drew some blood, since we hadn’t had any bloodwork done in a while, and wrote a script for a pain pill. The next afternoon, I got a call from him regarding my blood test. The test showed that a couple of liver enzymes were on the high side of normal, and he’d like me to get a CT scan just to rule out any problems with the liver. We were able to schedule the scan for the very next Thursday, as I was scheduled to fly to Seattle for MVP Summit on that same Saturday. Since I have trouble with iodine contrast, they allowed me to do the scan with just the oral contrast (if you’ve had an abdominal CT done, you know that lovely chalky stuff that they tell you is cherry or orange flavored to try to get you to think it’s not as horrible as it actually is). I got done with the scan quickly and went right back to work, still feeling yucky in my stomach thanks to that chalky crap. Just a few hours later, my doctor called and said that the CT scan showed a growth in my liver that wasn’t really clear on the scan, and he wanted to have an MRI done to see if it was a fatty liver tumor or something else. Amazingly, he scheduled the MRI for the following morning. Usually it takes several days to get scheduled at this facility, so I was a bit surprised, and when I asked him about it, he said he had pushed it through because he knew I was leaving town for a week and didn’t want me to worry about this while I was gone. So, 4 days after my initial visit with my doctor, I’m having an MRI done mid-morning on a Friday. Less than four hours later, I get another call from my doctor. The tumor is possibly cancerous, and they want to do a biopsy to see if it’s malignant or not. That’s when I started making calls to cancel my trip to Seattle so that I could stay home and have the biopsy done.


As it turns out, I ended up going to the emergency room that night with extreme abdominal pain and nausea and ended up checking into the hospital for a week. During that week, I had a number of tests run, including a PET scan on Wednesday morning. Later Wednesday afternoon, as my wife was coming back from making a quick run to the house to pick up some items, the hospitalist and a nurse came into the room. She asked where my wife was, and when I told her that she’d be back in a few minutes, she said that she had some news but she wanted to tell both of us at the same time. My stomach immediately knotted up. I called my wife to ask where she was, and she told me she was in the elevator and would be in the room in just a couple of minutes. The doctor and nurse left, my wife came in, and a few minutes later the doctor and nurse returned. She said that the results of the PET scan showed that the tumor in my liver was likely cancerous, and the scan showed several other spots in the abdomen that appeared to be cancerous as well. Over the next few minutes, once the sudden wave of nausea passed and I started breathing normally again, I realized that life as I knew it had come to an end, and I was going to enter a new reality, and I had no idea what that reality was going to look like.


In the eight months since we had that conversation in the hospital, I’ve had major surgery to have a section of my liver removed, and I’ve been undergoing chemotherapy treatments. I’ve been through hell and back, but am finally starting to feel well again, well enough to get back to work full time for the first time in months. That’s why this blog, along with my others, has been very quiet. I’m adapting to my new reality with some difficulty, but the picture is finally starting to get clearer, and things are beginning to look up in their own ways.


So how does all of this relate back to the IT industry in general, which is really the focus of this blog? Well, in this industry we’re constantly facing change. Sometimes the change is sudden and life-changing, like going through 9/11 or finding out you have cancer, and sometimes the change is slow and gradual. As I look back over the 25 years I’ve been in this industry, I have a hard time imagining how we were able to provide any level of quality support even 10 years ago without the tools we have now. And in my time in the industry, I’ve gone from being a Novell guy to being a UNIX guy to being a Windows guy to being a Mac guy (well, I’ve kinda mostly always been a Mac guy, it just took me a while to figure that out) to being an SBS guy who is passionate about small business and small business IT in general. Point being, if I had been stubborn and stuck to being a Novell guy or stuck to being a UNIX guy, I’d certainly not be where I am now, the owner of two IT-related businesses and a contributor to a greater SMB IT community.


Change happens. How you handle it defines how successful you will be through the transition. As you look around at the changes that are happening in our industry, if you’re taking the stubborn approach of doing the same thing you’ve always done, you’re probably not going to be very successful as the world is going to continue to pass you by.


If having cancer has taught me anything, it has taught me not to take things for granted. I did exactly that for a lot of my life and may have missed out on some opportunities as a result. But no more. I am grateful for every day that I have and look for things to appreciate in every aspect of my life, work included. And that, for me, has been a significant change…
8 Sep 2011

On Grass-Roots

Author: onq | Filed under: Breckenridge, Pontifications
There’s a wonderful little product from Microsoft that’s been garnering a lot of interest in the SMB space. No, it’s not SBS or even SBS Essentials. It’s the unfortunately-named Windows Storage Server 2008 R2 Essentials, or WSSE as most people have come to refer to it. During its development and beta process, it went by the code name Breckenridge. The product was officially announced back in November on the Windows Storage Server blog (http://blogs.technet.com/b/storageserver/archive/2010/11/08/announcing-windows-storage-server-2008-r2-essentials.aspx). Unfortunately, there’s not been a whole lot of activity since then. Why? Because for some strange reason, Microsoft decided that they would only make this product available through the OEM channel and rely on their hardware partners to develop and market the product. A lot of us in the SMB space have been waiting for someone to release WSSE in their product line, but it’s been very slow coming.


What is WSSE and why is it of interest in the SMB space? Essentially (no pun intended), WSSE is two core technologies from the Windows Home and Small Business Server product group, those technologies being Remote Web Access and Client Backup, put into an OS that can be domain joined and support up to 25 workstations. It’s like Windows Home Server without the 10-PC limitation and like SBS 2011 Essentials without the Active Directory restrictions. There’s a lot more to the product than that, but those are the two key selling points. For instance, I have a customer who will be moving out of the SBS product and into traditional Windows solutions because they have grown too big for SBS. But they are really unhappy about the prospect of losing the Remote Web Access piece because a number of their key employees have been using it for years to work from home by connecting to their workstations in the office remotely. They also have a desire to have several of their key business desktops backed up. Well, a WSSE-based solution would be the perfect fit for this customer, as it will back up their workstations, allow them to keep the Remote Web Access technologies they rely on, and still work in a larger Active Directory network.


If only I could find a company that’s making a WSSE device. And from what I’ve seen in mailing lists and blog posts, I’m far from the only one.


Well, last week, I ran across an e-mail from an associate pointing out that Highly Reliable Systems, a company we do a lot of business with for backup devices, was developing a Workstation Backup Appliance (WBA) based on WSSE. They had a page up at http://www.high-rely.com/HR3/includes/Smart%20Family/WBA/WBA.php stating that they were working on the product and hoping to release it soon. As soon as I read the page, I called my account manager to get a quote so I could get this device in front of my customer to solve their workstation backup and remote access problem. Then on Tuesday, something unexpected happened – HRS changed the web page for the WBA, indicating that due to lack of interest, they were stopping development on the product. I immediately fired off an e-mail to my account manager asking what the heck happened and why they were pulling the product when I had an order that was ready to go. I wasn’t the only one who hit them with the same question.


As a result, I had a conversation with the business development manager of Highly Reliable about the product and the actions that transpired behind the scenes. Essentially, they were having trouble confirming that there was enough interest in the product to guarantee enough sales to justify continued development of the device. Somehow, however, even though they thought they had reached out to all of their reseller partners to gauge interest in the product, no one had contacted my company to let us know about this appliance and gauge our interest. I had to learn from an unaffiliated source that the product was even being developed.


The gist of the conversation is this: Highly Reliable Systems is back in research mode to see if there is enough interest among the SMB marketplace to justify the continued development of this solution. And while they’ve heard stories of people who have sales waiting in the wings if only such a product existed and was available, they’ve heard from very, very few people who actually want to place orders for this product. I offered to help spread the word amongst the community that there is a WSSE device that’s available now (or at least very, very soon) in the US from a company that has a good reputation for providing quality backup hardware. I also agreed to pass along the contact information for the parties at Highly Reliable Systems so that current HR resellers, and those who aren’t resellers but would like to be, can contact Highly Reliable and let them know specifics about their interest in a WSSE appliance.


So, if you have customers who have a need for a WSSE appliance and are ready to place an order, or if you have a projection of potential sales of such a unit if you could get your hands on it next week, please contact Jeff Bowling of Highly Reliable Systems. His phone number is 775-329-5139 x111, and his e-mail address is JB at high-rely dot com. Jeff wants to hear from you if you have an interest in this device. Even if you’re not ready to place an order right this second, he wants to know if you see a potential for this device in your customer space. If you are ready to place an order for this device, then contact him immediately.


I’ve seen a lot of e-mails in various mailing lists and posts on various blogs from people who have been crying out for a device based on WSSE. Well, this is your chance to ensure that a quality device becomes available. In other words, it’s time to put up or shut up. I’ll be placing an order for two of these units just as soon as I am able. If you’ve been waiting for a chance to get a WSSE device, this is it. Pick up the phone or fire up your e-mail client and get in touch with Jeff. He definitely wants to hear from you!
17 Feb 2011

On SBS

Author: q | Filed under: eOnCall, SBS 2011, Third Thursday, Webinar
Today (Thursday February 17, 2011) is a busy SBS day for several of my endeavours. We started our first production migration to SBS 2011 for a large client yesterday, both eOnCall episodes today deal with SBS 2011 Essentials (you can listen to them at 10am and 1pm Central time at http://www.apostleradio.org/html/RadioPlayerTUNZ.htm) and Amy and I are presenting on SBS 2011 Standard configuration in the February Third Thursday webinar (see http://www.thirdtier.net/2011/02/managing-sbs-2011-completing-installation/ for access details).
6 Jan 2011

On Support

Author: q | Filed under: Frustrations, Observations, Support

I’ve had two very different cloud support experiences today, and they really highlight the VAST difference in how cloud-based organizations support their customers.


First, the bad. I have several customers signed up for web services through a shared web hosting provider (who shall remain nameless). We ran into a problem this morning that impacted most of those customers, and I went straight to the company web site to start a support incident. I first clicked on their “Live Chat” icon, only to be told that no one was available for chat, but I was offered the chance to leave a message. (Ironically, every time I’ve tried to access their “Live Chat” feature in the past 6 months, I’ve yet to have anyone actually be available on the other end.) Next, because this was impacting at least one customer’s ability to conduct business, I called in for support. I’m in Central time, and the hosting provider’s offices are headquartered in Eastern, and it was well after opening business hours here, but no one was available to answer the call. No big deal; if they’re having a major problem, I could see how all their phone support agents were tied up at the time I tried to call. So, I left a voicemail. A few minutes later, however, I recalled that last month when we’d run into an issue, I called and left a voicemail for support and never got a call back. So, I decided to go ahead and fill out an online support request form. I submitted the URLs of all the sites that were impacted and the error messages seen in the web browsers, and sent it off on its merry way. Sure enough, almost immediately, I received the obligatory “we received your support request, and here’s the request number.” Good, at least the ticketing system is working.


Just a few minutes later, I received an e-mail from a technician who said they were aware of the issue, were working towards a resolution, and would let me know when it had been resolved. Great! That’s all I can really hope for at this point. And sure enough, about 30 minutes later, I got an e-mail saying all had been fixed and should be good to go. I checked on a few of the sites impacted, and while some worked, others didn’t. Long story short, I ended up with multiple messages back and forth with their support team. We finally had all sites but one up and functioning, and even though I asked several times what had actually happened and what was done to resolve it, I never got an answer. I finally sent in a long error message for the site that was still not working along with another request for what the problem and solution were, and two things happened. First, a different technician responded to the request. Second, he actually told me what had been the problem and what they had done to fix it. He also said that he was looking into the remaining issue and would let me know when it was resolved. I ended up Googling the error, found several references to problems resulting from the exact thing they had done to resolve the initial issue, and sent links to those items back to them. Over the next several hours, we went back and forth, with the new support guy saying they had fixed the issue, and me replying with updated errors when it clearly wasn’t fixed. Even though this was significantly impacting one of my customers’ ability to work, I never felt a sense of urgency on their part. Ultimately, I think the problem was resolved (at least it hasn’t cropped up since the last set of changes made on their end), but not without a LOT of time on my part and a lot of research that I provided to them on topics that I’m just not that familiar with. Frustrating.


So, imagine my surprise when I got a call from my contact at Inbox Solutions (http://www.inboxsolutions.com), our hosted Exchange provider of choice, a few minutes after I sent in a non-critical request for assistance related to a spam filtering issue. Not only did I get a call, but he had understood my request and provided an immediate workaround to present to my customer having the issue. I wasn’t expecting or needing a response until tomorrow, but because he got back to me so quickly, I was able to implement the workaround with my customer this afternoon and got his issue resolved. This is on par with the level of support that I’ve received since I started working with Inbox Solutions, but it was still a very pleasant surprise after dealing with the other support nightmare of the day.


This is ultimately the crapshoot of dealing with cloud providers. You may have some that can provide excellent support while others don’t have the same focus. As we work with our customers who are contemplating moving some of their business processes into the cloud, this is a significant part of what we look for. Since we can’t put our hands directly on some of the cloud pieces that our customers will be using, it becomes paramount for us to align ourselves with cloud vendors who we can easily work with to resolve issues when they arise. I can honestly say that in the 9 months we’ve been dealing with Inbox Solutions, we’ve received zero customer complaints, we’ve had exactly two support calls (one that we were able to resolve internally and the other which was addressed today), and the response time we’ve received to our queries with them has been phenomenal. In the two years we’ve been working with this web hosting provider, we’ve seen an increase in downtime, longer support response times, and a lack of urgency to our support needs. Guess which vendor we’ll be sticking with and which we’ll transition away from…

6 Jan 2011

On Resolutions/Requests for 2011

Author: q | Filed under: eOnCall

Today’s eOnCall episode covers my thoughts and requests for SMB technology plans for 2011. Listen live at 10am/1pm Central time at http://airtunz.us/rock.html; episodes will be available for download later from http://eoncall.com.

23 Nov 2010

On Drive Extender

Author: q | Filed under: Aurora, Beta, Breckenridge, Drive Extender, SBS

Today, Microsoft announced the removal of the Drive Extender technology from the “Colorado” product line. That includes the next version of Home Server, SBS 2011 Essentials, and Storage Server 2008 R2 Essentials, all of which are still in beta. The Home Server announcement is at http://windowsteamblog.com/windows/b/windowshomeserver/archive/2010/11/23/windows-home-server-code-name-vail-update.aspx and the SBS 2011 Essentials announcement is at http://blogs.technet.com/b/sbs/archive/2010/11/23/windows-small-business-server-2011-essentials-update.aspx. There are a lot of folks in the Home Server arena who are probably going to be really unhappy about this, as well as some who were looking forward to having Drive Extender in SBS 2011 Essentials (Aurora) and Storage Server 2008 R2 Essentials (Breckenridge). Personally, I’m thinking it’s a good thing for my business and the potential customers we have who will be looking at these products.


For those who are asking “What is Drive Extender and why should I care,” here’s a brief summary. Drive Extender was a disk management technology introduced with Windows Home Server that allowed the total storage on the box to be expanded by adding a disk of any size and any kind. Wikipedia has a little better description at http://en.wikipedia.org/wiki/Windows_Home_Server#Drive_Extender. On my Home Server at home, I have a pair of 500GB disk drives and a 1TB drive, all connected on the internal SATA controller, and I just added a 750GB USB drive, all pooling to make one large storage area available for my music library and my wife’s photo library. Unlike RAID, where all the disks have to be exactly the same geometry and the entire array has to be rearranged when new disks are added, with Drive Extender the newly-added disk can be added to the storage pool at any time and increases the overall storage amount.


While Drive Extender was cool technology for the Home Server market, I had real concerns about it in the Aurora and Breckenridge products. Yes, having the ability to add storage willy-nilly without a concern about the size and type of drive seems nice, but many application vendors refused to support their products on a Drive Extender platform. While logically all the storage appears to be one large single volume, the actual data stored on the drives could literally be anywhere, and possibly in multiple locations. Think about a SQL database where the log files might be stored physically on a SATA-connected drive, but the database files actually reside on external USB storage. As far as the OS is concerned, it’s all one large volume, but the performance in that scenario would be a real mess.


So as I’ve been working with Aurora and planning for how we’ll be rolling out SBS 2011 Essentials implementations, I was already making plans to boost the storage in a box that would run Aurora to ensure that some disk was allocated to Drive Extender for storage of some data, but other disk would be excluded from the Drive Extender pool so that I could install LOB application data (i.e., QuickBooks, SharePoint, Kerio Connect, etc.) onto the non-DE storage area. But then how do you protect the non-DE storage area? Put it on a RAID array? Then what’s the value of DE if I’m already putting in some sort of hardware fault tolerance on the box? And what if a vendor chooses to say “we don’t support our product on SBS 2011 Essentials” because of DE, even though I’ve got their data and/or application installed on non-DE storage? That’s a possible support nightmare I was not looking forward to getting into.


So, today, we know that DE will no longer be a part of the Colorado family, and we’re waiting on updated beta builds of the product that do not have the DE technology implemented. Now I can speak more confidently about what application support will look like on Aurora, because there is no DE to confuse the issue. I can start scoping out “standard” hardware to use as a foundation for an Aurora install. Am I going to have to rethink how I do my next “home server” box at the house? Sure, but I wasn’t sure I was going to make that box run on Home Server anyway; I’m probably doing that one on Aurora. And we don’t sell a lot of Home Server in our business, and pretty much won’t once Aurora and Breckenridge become available.


While I can see how the Home Server folks are going to lament the loss of DE from their product, as cool as it is, removing that technology removes a LOT of roadblocks I was expecting for Aurora and Breckenridge, and that’s good news for my business.

16 Sep 2010

On October 21

Author: q | Filed under: Training

In case you hadn’t heard yet, Third Tier will be doing a pre-day training event in Las Vegas on October 21. Information about the event can be found in this Third Tier Blog Post. At the time of this posting, we’ve already reached 60% of the registration limit for this event. If you haven’t yet, head on over to the Third Tier site and register. Look at the Third Tier Blog for detailed information about the sessions that will be presented.


We’re really excited about this event and hope to see you there!

16 Sep 2010

On September 22

Author: q | Filed under: Kerio, Webinar

I’ll be co-presenting a webinar with Kerio on their Connect mail server product running on Windows Foundation Server. Here are the details for the webinar, including a registration link at the bottom of the post:


Micro Businesses Find Their Edge with Kerio Connect

Wednesday | September 22, 2010 | 10:00 AM PDT

For many small and micro businesses, deploying a full-featured email and calendaring server can be a cost-prohibitive endeavor, especially when considering Microsoft SBS or Exchange. But these organizations have a unique advantage.

One of our industry’s best-kept secrets for organizations under 15 users is to deploy Kerio Connect on Windows Server 2008 R2 Foundation on a decent entry level server – all for less than the cost of the SBS license alone.

Kerio Sales Engineer Brian Carmichael and our special guest and Microsoft SBS MVP Eriq Neale will jointly host this live technical webinar. Don’t miss this session in which Eriq will share some of his personal tips and tricks to properly configure Kerio Connect on a Windows Server 2008 R2 Foundation box with IIS installed for TS Gateway services.

In this webinar Eriq will: 

  • Review the prep work needed on Windows including setting up multiple IPs on the server and getting IIS to listen only on one IP 
  • Install Connect and configure it to listen on the other IP 
  • Install the AD connector 
  • Set up the Outlook connector on a workstation 


Who should attend 

  • IT Solution Providers 
  • Independent Consultants 
  • Small business IT managers, owners and operators 


Presenter
Brian Carmichael, Sales Engineer, Kerio Technologies
Eriq Neale, SBS MVP; Owner, EON Consulting; Partner, Third Tier

Register
https://kerioevents.webex.com/kerioevents/onstage/g.php?t=a&d=666286300 


6 Sep 2010

On Policies

Author: q | Filed under: Group Policy, Pontifications, SBS

So Susan posted her thoughts on how to approach managing Group Policy in SMB environments. In the post, she asked for comments and thoughts, and since I can be a bit wordy and might want to include some content that would be difficult to add in the comment space, it seemed to me like a post on the topic was in order. 


First, a bit of background. I do a LOT with Group Policy. I’ve written the Group Policy chapters (among others) in the SBS 2003 and 2008 Unleashed books. I’ve given numerous user group and conference presentations on Group Policy. I’m certainly no Jeremy Moskowitz, who is pretty well recognized as one of the foremost experts in Group Policy, but I’ve been around the block with GP and seen how powerful it is, and how dangerous it can be. And to set the record straight, I’m pretty much in line with Jeremy’s approach to GP.


My first rule of Group Policy management is simple: NEVER edit the policies named Default Domain Policy or Default Domain Controllers Policy. Period. End of discussion. In an Active Directory environment, these are the core policies developed by Microsoft to get a solid, stable AD environment, and mucking around with them can cause issues. Why? Well, because there’s no “undo” in Group Policy editing, for starters. If you make a change to one of these objects and it has unintended consequences, like inadvertently locking EVERY object out of the domain, there’s no “undo” button you can click to make things go back to the way they were before. Sure, there are ways to “go back,” but that involves working with backup software (assuming you can get to the backup software to run it, which could be difficult if you’re locked out of the domain), volume shadow copy (see previous point), or manually editing the gpt.ini files directly (I’ve done this once, and if I never have to do that again, it will be too soon). Then there’s the issue that if you do go back and end up having to do a restore of a domain prior to a point where you made changes to the default policies, those changes are lost and not easily recoverable (outside of documentation, a point I will address a bit later).
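
Since there’s no undo button, the closest substitute is a snapshot taken before you touch anything. Here’s a minimal sketch, assuming the GroupPolicy PowerShell module that ships with GPMC on Server 2008 R2; the backup path is illustrative:

```powershell
# Minimal sketch - assumes the GroupPolicy module; path is illustrative
Import-Module GroupPolicy

$backupDir = "C:\GPO-Backups\{0}" -f (Get-Date -Format yyyy-MM-dd)
New-Item -ItemType Directory -Path $backupDir -Force | Out-Null

# Snapshot every GPO in the domain, including the two Default policies
Backup-GPO -All -Path $backupDir

# If an edit goes sideways, roll a single GPO back from the snapshot:
# Restore-GPO -Name "Default Domain Policy" -Path $backupDir
```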


My second rule of Group Policy management is also simple: test your Group Policy changes on a small subset of the domain before releasing them into production. You can’t do that when editing the Default policies. Those policies apply to EVERYONE and EVERYTHING in the domain.
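
For illustration, here’s roughly what that staging looks like from PowerShell. The OU and GPO names are hypothetical, and this again assumes the GroupPolicy module:

```powershell
# Hypothetical names throughout; the pattern is the point, not the policy
Import-Module GroupPolicy

# Create the change as its own clearly-named object...
$gpo = New-GPO -Name "TEST - Folder Redirection" `
               -Comment "Staged change - not yet in production"

# ...and link it only to a small test OU, so only those objects process it
New-GPLink -Name $gpo.DisplayName -Target "OU=GPO Test,DC=example,DC=local"
```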


To address Susan’s first point: “I personally will make a new separate policy when it makes sense to do so.” Well, that’s a *really* loaded statement. Susan is looking at the topic specifically from the viewpoint of SBS and its environment, which is pre-configured with a number of Group Policy Objects that don’t exist outside the SBS product arena. In non-SBS land, there are only two Group Policy Objects – the Default Domain Policy and the Default Domain Controllers Policy. If you follow the reasoning of my first rule above, you’re not touching either of those objects, in which case it ALWAYS makes sense to make a new separate policy.


OK, that’s a bit nit-picky, but it goes back to perspective. Susan’s post only calls out Group Policy in SMB, not Group Policy on SBS. But knowing Susan and knowing the environment she works in, her post was specifically referring to SBS environments, not the greater SMB environment. Because in the SMB environment, we have tools like Microsoft Windows Server 2008 Foundation Server (henceforth referred to as “Foundation server”) and the new “Aurora” product which, when you look at their Active Directory Group Policy space, only have the two Default policies (repeat my point about never, EVER editing the Default policies).


I think that horse has sufficiently been beaten. Now let me address Susan’s post from her perspective – managing and editing policies on an SBS network. And here’s where I take the other side of the coin from her approach. Well, in one specific case.


In the last line of her post, Susan implies that she adheres to an SMB “rule” that “you’d want to have new policies for anything that you add” to the network. So in that regard, Susan and I are of the same mindset. Given my two rules above, never editing the Default policy objects and testing Group Policy changes on a small subset of the domain, we’re immediately in the realm of any change to the Group Policy environment going into a separate, stand-alone policy. When testing, at least. But what do you do when you’re done testing?


It depends.


If you are setting up a policy that only applies to a certain set of users/workstations/etc., you’re going to have a separate policy for that scenario. In other words, you’re not going to create a policy, put it at the root of the domain, and yet somehow set it up so it filters only to certain objects for certain contents of the GPO. Doesn’t work that way. In fact, the SBS development team has, in my opinion, done an outstanding job of keeping the number of Group Policy Objects to a minimum, meaning that they’ve combined similar modifications or changes to a specific subset of the network into a self-contained, clearly-named GPO. And it’s that approach that I follow when working with Group Policy in all of my networks, SBS-based or not. But I also treat the SBS-generated Group Policy Objects as nearly untouchable as the Default Domain Policy and Default Domain Controllers Policy (which I believe should never, EVER be edited, by the way).


Why? A few reasons. One – if I modify an SBS-generated GPO, and then re-run whatever wizard actually created or updated that GPO, my changes are lost. Two – if the SBS team releases an Update Rollup that touches Group Policy, those changes could be lost. Three (and this is the one I consider the most important) – if I have to call Microsoft for support on an issue, the technician who takes the call is going to look at the Group Policy environment and assume that it has not been modified from the defaults. And that could be a nearly-fatal assumption depending on the basis for the support call. A corollary to this is if another support organization takes over support of the network – they’re going to assume that the Windows SBS Client Policy (as an example) has the same content as every other SBS 2008 implementation.


So, again, Susan and I are of the same mindset in general. Don’t muck around with the SBS policies. So does that mean that you should create a new GPO for every policy change you want to make on the network? No, not in my book (figuratively speaking). My third rule of Group Policy management is: combine similar policy elements into a single GPO. A corollary to that is: combine elements that apply to the same subset of objects into a single GPO. For example, I’ve had a frequently-accessed post about Disabling SMB Signing in SBS 2003 that provides steps to create a stand-alone GPO for the specific purpose of allowing non-Windows workstations to access SMB shares on an SBS 2003 server (or any domain controller with file shares, for that matter). This is a standard object we create on our managed systems in multi-platform environments, or environments where MFC devices want to be able to scan to a file share on a domain controller. But I digress. That object contains one and only one adjustment to the domain environment and is clearly named as to what it is. Why? Because if someone comes along to provide additional support to the network, or if that customer has to invoke the “bus contingency” (that is, should my entire organization get taken out by a bus or other calamity), someone going in and looking at Group Policy can see immediately that something different has been done and get a good idea what it is without having to open the policy itself.
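
As a hedged sketch of that single-purpose object, here’s one way it could be built from PowerShell rather than the GUI. The original article sets the “Microsoft network server: Digitally sign communications (always)” security option through the Group Policy editor; this version pushes the equivalent registry value instead, and the GPO and domain names are illustrative:

```powershell
# Sketch only - not the steps from the original article
Import-Module GroupPolicy

$gpo = New-GPO -Name "SMB Signing Disabled" `
               -Comment "Lets non-Windows clients reach DC file shares"

# Registry value behind "Microsoft network server: Digitally sign
# communications (always)" for the machines that process this GPO
Set-GPRegistryValue -Name $gpo.DisplayName `
    -Key "HKLM\SYSTEM\CurrentControlSet\Services\LanManServer\Parameters" `
    -ValueName "RequireSecuritySignature" -Type DWord -Value 0

New-GPLink -Name $gpo.DisplayName -Target "DC=example,DC=local"
```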


If I had other domain-level modifications to make on a network where we’ve modified Group Policy, I might take the approach of including those other changes in the same GPO, then changing the name from “SMB Signing Disabled” to “Domain Policy Modifications” or something equally clear that it’s not one of the default policies. Why? Because it is possible to bog down a workstation during the boot or logon process if there are too many policies to process. How many is too many? That depends on the speed of the server (as the policy files have to be read off the server disk at some point), the speed of the network (so the workstation can pull the policy objects from the server), and the speed of the workstation (’nuff said), but it can happen. A typical SBS network already has a number of GPOs created, but honestly, in a typical SBS deployment, adding another 4-6 policy objects isn’t going to significantly impact the performance of the server or the workstations. So, if I were adding two changes to the domain, disabling SMB signing and some other domain-level task, I would still probably create those as two separate policies and not combine them, again primarily for clarity’s sake. Even as well as we document the networks we support, who knows if that information is going to find its way into the right hands if we suddenly become unable to provide support for the network.


Susan says that the one exception she will make to the rule of not editing the SBS-generated GPOs is when she wants to make changes to the Windows Firewall settings on all workstations. She says if she thinks a firewall exception should have been a default (i.e., if she thinks the SBS development team failed to include a critical element in the firewall), she will put it into the default (and correct) firewall GPO. In Susan’s environment, where she is providing internal support for her own network, fully controls the entire environment, and is not likely to ever turn support over to an external support organization, that’s within her rights. I disagree with her philosophically on that point, but in her environment, it might make the most business sense. But in my opinion, for IT support organizations, even adding a firewall rule that you think should have been included with the product by default should really go into a separate GPO, if for no other reason than when someone other than you looks at the box, they can see relatively quickly what has been done to GP. Is it a high risk that adding an exception to the firewall rule is going to take down the network? No, it’s actually a fairly low-risk endeavor from a network impact point of view. But so is putting those changes into a separate GPO.
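
To make the separate-GPO approach concrete, here’s a hedged sketch of standing one up. The GPO and domain names are hypothetical, and the netsh commands are meant to be run inside a single elevated netsh session, since the store selection only lasts for that session:

```powershell
# Create and link the stand-alone firewall GPO (illustrative names)
Import-Module GroupPolicy
New-GPO -Name "Workstation Firewall Additions" |
    New-GPLink -Target "DC=example,DC=local"

# Then, in one elevated netsh session, point the firewall context at
# that GPO and add the rule there instead of touching the SBS policies:
#   netsh
#   advfirewall
#   set store gpo = "example.local\Workstation Firewall Additions"
#   firewall add rule name="LOB App Inbound" dir=in action=allow program="C:\LOB\App.exe"
```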


Susan says that she’s concerned about workstations processing more than one GPO that has firewall rule elements in it. There’s no need for concern from a performance standpoint. Processing multiple firewall rules in a single GPO isn’t significantly faster than processing multiple firewall rules across multiple GPOs. She also says that she thinks it puts the workstation at greater risk if she doesn’t create a rule set properly. Susan, dear, if you’re not going to build a rule set properly, it doesn’t matter if it’s in a stand-alone GPO or in the SBS-generated GPO. If you’re going to screw up a rule set, you’re going to screw up a rule set. And honestly, I think that there’s a greater risk of impacting ALL the firewall rules for the domain if you somehow really screw up adding a rule set to the existing GPO than if you create a standalone rule.


Which brings me to my final point about creating GPOs as separate objects and not rolling them in with the default objects. Let’s say that you did create a bad firewall rule and it actually negatively impacted workstations on the network. It’s far easier to disable the GPO that has the custom firewall rule to “fix” the problem than to go back in and edit the GPO to remove the offending element. And if you screwed up the rule when you put it in, what’s to keep you from screwing it up when you try to edit or remove it from the existing GPO?


Here’s how I approach GPO management in a nutshell (yes, perhaps it is about damn time, but I wanted to get the background out there):


1. I always create a new GPO and tie it to a specific set of workstations or users for testing, either through OU assignment or security restrictions. 


2. After I’ve tested the GPO and confirmed that it works, I’ll either remove the restrictions or OU assignment so that it can be applied to the full set of objects it should be applied to.


3. If, after testing, it makes sense for the GPO element to be included as part of another custom GPO that has already been tested and implemented, I’ll edit that existing object, knowing full well that I can quickly disable that custom object if problems arise (a rough sketch of steps 2 and 3 follows below).
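
Reusing the hypothetical names from the earlier sketch, and again assuming the GroupPolicy module, steps 2 and 3 look roughly like this:

```powershell
Import-Module GroupPolicy

# Step 2: testing passed, so retire the test link and apply domain-wide
Remove-GPLink -Name "TEST - Folder Redirection" -Target "OU=GPO Test,DC=example,DC=local"
New-GPLink    -Name "TEST - Folder Redirection" -Target "DC=example,DC=local"

# Step 3's escape hatch: if a custom GPO misbehaves in production, switch
# the whole object off instead of editing settings out of it under pressure
(Get-GPO -Name "TEST - Folder Redirection").GpoStatus = "AllSettingsDisabled"
```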


So this treatise on GPO isn’t going to change Susan’s mind on how she approaches doing firewall-related GPO operations for her organization. I maintain that it’s better to create a separate GPO for firewall adjustments on an SBS network (not one for each firewall change, mind you, unless you have specific firewall rules for specific workstations) than to edit the SBS firewall objects. It doesn’t create performance problems on the network or workstation, it falls in with the rest of the approach of doing separate GPOs for custom settings, and it’s easy to turn off the rule if something does go wrong in the deployment of the object.

24 Aug 2010

Meet Aurora (1 of ??)

Author: q | Filed under: Aurora
Now that the public beta of Aurora is out and in the wild, we can finally talk turkey about the product and what it does and doesn’t do. To that end, I’m starting a series of posts to introduce Aurora to people who might not otherwise be able to look at the product. My reasons for doing this (given that there are a lot of other folks who are also blogging/writing about the product) are multifold. First, back in the SBS 2008 pre-release days, I got up on my soapbox and told everyone who would listen that they needed to take a long hard look at SBS 2008 because it was significantly different from SBS 2003. Based on the types of issues I’m still helping IT Pros get through with SBS 2008, there are a LOT of people who didn’t do this. Well, Aurora is completely different from anything you’ve seen in the SBS product space before, and as such, there are some misconceptions and false assumptions I hope I can stamp out early on through these posts. Second, there are some things about the defaults in Aurora that I think need to be tweaked that I doubt very seriously will make it into the final product release build, so I’ll be documenting some of those tweaks here as we go through the series. Third, there are some, well, *different* things I’ll be doing with Aurora, and I want to have a place to highlight some of those unusual configurations, especially if a few of these zany ideas make sense to other IT Pros who want to use them in their own deployments. Finally, I think that Aurora is going to be a huge player in the under-25-employee business space, and the sooner consultants and businesses learn about what it can (and cannot!) do, the better!

So, with that introduction, let’s get started. If you haven’t already, I HIGHLY RECOMMEND that you start learning about the product on your own. You can get some overview information from the SBS Blog post from Michael Leworthy. That post includes links to several resources, including an overview video, that make for a good introduction. I’d also recommend that you read the Aurora Beta Announcement on the SBS Blog and go sign up for the beta of the product so you can get your hands on the bits now.

While you’re waiting for the bits to download, let’s take a quick tour of an out-of-the-box basic install of Aurora in a test environment. From the basic desktop screen, you can see that this is NOT your typical SBS.


In fact, if you’ve seen Windows Home Server, it should look really familiar to you (especially if you’ve been in on the Vail beta). That’s because Aurora is built on the same codebase as the next version of Windows Home Server, codenamed Vail. We’ll get more into the similarities in later posts, but for now, let’s mention the one key difference between Vail and Aurora, and that’s Active Directory.



As you’ll see in the above image, when looking at the list of services running on our unmodified Aurora install, there are Active Directory services running on this server. These are set up as part of the Aurora install and are present by default (i.e., you cannot choose whether to install Active Directory or not). Part of the licensing restriction for Aurora is that it must run Active Directory, and it must be the root domain holder for the network (very similar to the licensing restrictions for SBS, and the reason why you cannot have Aurora and SBS in the same domain).



One other key service of Active Directory is DNS, and as you can see in the above shot of the second page of services on Aurora, the DNS Server service is installed and running. Again, this is done as part of setup and is not configurable. Active Directory relies heavily on DNS, so it’s good to have the service there and pre-configured as part of the setup.

If you look carefully at the list of services in that second screenshot, however, you may notice something missing (if you’re used to the typical SBS installation). That’s right, there’s no DHCP Server service listed. Aurora does not preinstall or preconfigure DHCP services for the network. The default assumption with Aurora is that some other device on your network, perhaps the Internet router, is providing DHCP for the network. This is one area where I disagree with the default configuration of Aurora out of the box. I firmly believe that DHCP should be installed and running on Aurora so that proper AD information can be handed out to workstations participating in the Aurora network, such as the default internal domain name and the IP address of the Aurora box as the primary DNS server for the workstations. Anyone who has run across domain-joined workstations that do not point to a domain-enabled DNS server knows that the Active Directory performance of the workstation leaves a great deal to be desired. Fortunately, the DHCP service can be installed on Aurora, and I have it on good authority that steps for doing so will be included in independent Aurora build docs that are being developed right now.
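
As a sketch of what that might look like on an Aurora box (Aurora builds on Server 2008 R2, so the ServerManager module and netsh dhcp should be present), with purely illustrative addresses and names – the official build docs may well take a different route:

```powershell
# Hedged sketch only - illustrative addresses and names throughout
Import-Module ServerManager
Add-WindowsFeature DHCP

# The DHCP Server service arrives disabled; turn it on
Set-Service DHCPServer -StartupType Automatic
Start-Service DHCPServer

# Authorize the server in AD, then build a scope that hands out the
# Aurora box as primary DNS along with the internal domain name
netsh dhcp add server aurora.example.local 192.168.16.2
netsh dhcp server add scope 192.168.16.0 255.255.255.0 "LAN Scope"
netsh dhcp server scope 192.168.16.0 add iprange 192.168.16.100 192.168.16.199
netsh dhcp server scope 192.168.16.0 set optionvalue 006 IPADDRESS 192.168.16.2
netsh dhcp server scope 192.168.16.0 set optionvalue 015 STRING "example.local"
netsh dhcp server scope 192.168.16.0 set state 1
```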

Next, let’s take a look at the Active Directory environment that is configured with Aurora. Below is a capture of the Active Directory Users and Computers console showing the AD defaults for Aurora.



Again, anyone familiar with SBS will notice significant differences between the SBS AD configuration and the Aurora AD configuration. The Aurora AD configuration is the same as what you would get installing Active Directory on a standard Windows Server 2008 box. No custom OUs, user accounts placed in the Users container, and so on. This isn’t necessarily a bad thing – Foundation Server does the same when AD is installed (it is not installed by default). But it *is* different from SBS, and that’s something that IT Pros need to be aware of. This configuration has significant impacts on how Group Policy will be applied, but we’ll dive into Group Policy on Aurora in more detail in a later post.

Since we did mention Group Policy, however, let’s take a quick peek at the Group Policy configuration in our out-of-the-box Aurora install:



When you look in the Group Policy Management Console, you’ll see that the only GPOs listed for the domain are the Default Domain Policy and the Default Domain Controllers Policy. That’s it. Again, this is exactly what you’d expect from a traditional Active Directory installation, but NOT from an SBS installation. SBS has used Group Policy heavily in its configuration since SBS 2003, but that is not the case in the default Aurora install.
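
If you’d rather verify that from a console than trust a screenshot, something like this should confirm it, assuming the GroupPolicy module is available on the box:

```powershell
Import-Module GroupPolicy
Get-GPO -All | Select-Object DisplayName, GpoStatus

# On a default Aurora install, the expectation is just two entries:
#   Default Domain Policy
#   Default Domain Controllers Policy
```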

So in this quick look at Aurora, I hope you’ve seen that Aurora is NOT the “next version of SBS” as some media outlets have claimed. It’s going to be an interesting hybrid of Home Server and Foundation Server, but it is NOT a derivative of the traditional SBS product line. While not all of the details regarding Aurora have been finalized or made public yet (e.g., pricing, licensing, additional restrictions, etc.), I still think that this is going to be a great platform to build on for the 1-20 employee business. I’m already making plans to “upgrade” several of our customers from SBS 2003 to Aurora once the product is released (and I’ll cover more about how I plan to approach that move from a technology standpoint in later posts in this series), and I see the potential of this product with other clients that we haven’t had a good solution for up until now. But as different as this product is, the typical SBS consultant will need to rethink the way they approach ongoing maintenance for this solution, and the best way to devise those plans is to start working with the product NOW to see what you’re really up against. Some of the tools or processes you’ve been using for years simply may not work the same way on Aurora as they do on your other supported devices, and you really don’t want to figure that out AFTER you’ve deployed this to a customer.

Bottom line, we’re sold on Aurora, and think you will be, too.