Disable WiFi with PowerShell

A bit like putting your laptop into Flight Mode, I guess.

I fly fairly often at the moment, but the wireless switch on the side of my laptop is slowly wearing out. Occasionally when it’s in the OFF position, it thinks it’s actually in the ON position for a moment, and I see the WiFi LED come on. It’s probably a warranty item which I should get fixed soon. But until I do, I’m in the habit of opening up Computer Management and disabling the device.

So I got to thinking that I should be able to do this with PowerShell, and make myself a “FlightMode” script.

I started by figuring that there must be a WMI class that described the network adapters. So I ran:

get-wmiobject -list | select-string adapter

Sure enough, one of them was Win32_NetworkAdapter. So now I could identify the network adapters available.

get-wmiobject -class win32_networkadapter -namespace root\CIMV2

Great. Plenty of them, and in particular, one of them was my wireless adapter. I could grab that alone by running:

get-wmiobject -class win32_networkadapter -namespace root\CIMV2 | where-object {$_.Name -match "Wireless"}

Now to find out how to disable or enable it. Easy… pipe my object into get-member.

get-wmiobject -class win32_networkadapter -namespace root\CIMV2 | where-object {$_.Name -match "Wireless"} | get-member

Seeing Enable() and Disable() methods makes this easy…

get-wmiobject -class win32_networkadapter -namespace root\CIMV2 | where-object {$_.Name -match "Wireless"} | % {$_.Disable()}

get-wmiobject -class win32_networkadapter -namespace root\CIMV2 | where-object {$_.Name -match "Wireless"} | % {$_.Enable()}

I’m looking for anything that matches “Wireless”, just in case. I really don’t want to be on a plane and accidentally using a WiFi network adapter. Next I need to work out a script which will switch my timezone.

The problem with America – spatial data

Ok, it’s not really a problem with America, just a problem with the perception of straight lines like the USA-Canada border, which roughly follows the line of latitude 49 degrees north – a line that runs east-west, and which looks straight on a map.

The trouble is that latitude lines aren’t straight – except for the equator. Just look on a globe and you’ll see what I mean. The curvature of the earth just gets in the way a bit. If you’re ten miles from the North Pole and travel east until you come back on yourself, you’ve just gone around in a circle that’s 20 miles across. The smaller the circle, the more obvious the effect. The bigger the circle (like when you’re near the equator), the less obvious the effect. But it’s still there.

Pilots understand the principle. When I flew from London to Seattle earlier this year, we went across the top of Greenland, because that’s the quickest way of getting there. Pick up a globe of the world, and you’ll see exactly why we should fly that way.

When someone wants to fly from one point on the 49th parallel to another, or simply to draw the shortest line between two points on it, the route actually goes a little further north. The 49th parallel is like that 20-mile circle – it bends to match the curvature of the Earth.

So if you’re mapping the world using spatial data for SQL Server 2008, you may want to consider this when looking at the American border. It’s not a straight line along the 49th parallel, because the 49th parallel isn’t a straight line.

[Image: the US border along the 49th parallel]

To demonstrate this, my friend Greg Larsen (who runs the Olympic Area SQL Server User Group) has sent me a query. You’ll notice that there are some locations north of the 49th parallel that are within the polygon which seems to be bounded by the 49th parallel.

USE AdventureWorks2008;
GO
DECLARE @region geography;
SET @region = geography::STGeomFromText('POLYGON((-67 49, -125 49, -125 24, -67 24, -67 49))', 4326);

SELECT  [AddressLine1]
       ,[City]
       ,[SpatialLocation].Long AS Long
       ,[SpatialLocation].Lat AS Lat
FROM [AdventureWorks2008].[Person].[Address]
WHERE [SpatialLocation].STIntersects(@region) = 1
and [SpatialLocation].Lat > 49;

This query actually gives us 632 rows. That’s 632 addresses in the sample database which are more than 49 degrees North, and yet are within a polygon which has corners at -67 49 and -125 49. Hopefully you can look at the picture of the US border and see where those 632 rows must be.

SQL Server spatial stuff is really clever. But perhaps we need a way of telling it to draw lines east-west, instead of as the crow flies.
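
If you need a region that hugs the parallel, one workaround is to densify the edge yourself: add intermediate vertices along latitude 49, so that each geodesic segment is too short to stray noticeably. Here’s a minimal sketch of the idea – the one-degree vertex spacing and the STArea() sanity check are my choices, not part of Greg’s query:

-- Build the WKT by walking west along latitude 49, dropping a vertex
-- every degree of longitude, then closing the ring via the southern corners.
DECLARE @wkt nvarchar(max);
DECLARE @lon int;
SET @wkt = 'POLYGON((-67 49';
SET @lon = -68;
WHILE @lon >= -125
BEGIN
    SET @wkt = @wkt + ', ' + CAST(@lon AS varchar(5)) + ' 49';
    SET @lon = @lon - 1;
END;
SET @wkt = @wkt + ', -125 24, -67 24, -67 49))';

DECLARE @region geography;
SET @region = geography::STGeomFromText(@wkt, 4326);
SELECT @region.STArea() AS AreaSquareMetres;  -- sanity check that the ring is valid

Re-run Greg’s query with this @region and the addresses above 49 degrees should all but disappear, because no segment of the border now bows more than a tiny fraction of a degree north of the parallel.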

Ordered data requires ORDER BY

When you select data out of a database table, it might seem as if the system will give you data in the order of the clustered index that is on that table (assuming it has one), but this isn’t quite right. I showed this in the fifth of my T-SQL tips at TechEd Australia this year.

If you don’t explicitly order data using the ORDER BY clause, the system makes no attempt to provide you with the data in any particular order. It will just give you the data in the fastest way possible. You might be lucky and get the data in a predictable order most of the time. In fact, on your system, the data might be returned in a predictable order over 99% of the time. Unfortunately, you can’t rely on that.

When the database engine is handling your query, it needs to get your data into RAM first. If it’s not already there, this means a trip to disk, which will generally be the slowest aspect of your query. With multiple processors and multiple disks being the norm these days, there is a good chance that one processor will head off to get the data from one disk, while another processor gets data from another. If your query isn’t explicitly stating the order for your data, whichever processor gets its data first is likely to determine which data appears first in your results. There are other reasons why your data might come back in an unexpected order, but this reason alone should convince you.

I’m sure, reading this, you’re very much used to putting ORDER BY in your queries. But what happens if you’re using SQL Server Integration Services (SSIS)? When you set up a data flow and want to pull data out of a particular table or view, do you use the “Table or View” option in the data source? You might – it’s a lot easier than typing out your query with the SQL Command option. Just don’t. Use the SQL Command option instead.

[Image: the data access options in an SSIS data flow source]

The problem is that SSIS can really take advantage of ordered data. If SSIS can’t guarantee that the data is ordered, it will assume it’s not, and this might hurt the performance of your package significantly. Some data flow transformations can’t even run on unordered data. You can tell a data flow source that the data is ordered – you can even tell it that when it isn’t, but don’t. Only tell a data flow source that the data is ordered if you’re explicitly ordering it. If you tell SSIS that a data flow is ordered, and the data comes through in a different order (because you’re unlucky one time), you will get unexpected results. What I mean by ‘unexpected’ here is ‘wrong’.

It’s not hard, just write out your query, and use the ORDER BY clause. You know you should…
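
For example, a SQL Command along these lines (the table is just the AdventureWorks Product table used elsewhere in these posts) makes the order explicit, and it then becomes safe to mark the source output as sorted (the IsSorted and SortKeyPosition properties in the Advanced Editor):

-- Use this as the SQL Command in the data flow source. The ORDER BY makes
-- the order a guarantee rather than a fluke, so telling SSIS the output is
-- sorted by ProductID is now telling the truth.
SELECT ProductID, Name, Color, Size
FROM Production.Product
ORDER BY ProductID;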

Unique Indexes with GROUP BY

Indexes are great, but if you don’t understand the significance of unique indexes, then you’re potentially missing out on some decent performance gains.

I’ve been meaning to write this blog post for a long time. This is material that I’ve been teaching and presenting about all year, but somehow it’s never turned into a blog post. It was the third tip in my presentation at TechEd Australia (and at user group events in Adelaide and Melbourne). Today I’m flying back home from the US, so hopefully I’ll be able to spend some battery time getting it done. [Note: It’s ended up out of order with another post that I wrote on the plane. This one is Tip #3.]

This query runs in the AdventureWorks database, in almost any version of SQL Server you have. When I present on this at the moment, I use SQL Server 2005, so that people don’t think I’m just showing new SQL Server 2008 features. This concept will help you regardless of version.

SELECT s.Name, COUNT(*)
FROM Production.ProductSubcategory s
  JOIN
  Production.Product p
  ON p.ProductSubcategoryID = s.ProductSubcategoryID
GROUP BY s.Name;

It’s a simple query that counts the number of products in each non-empty subcategory. Having seen the logical action of the query, let’s look at the execution plan. This will show us what the query is actually doing. We’ll look at it both graphically and in text. (The plan itself is stored as XML; the graphical view is a rendering of it, and additional information can be seen using the tool tips and the Properties window.)


|–Merge Join(Inner Join, MERGE:([p].[ProductSubcategoryID])=([s].[ProductSubcategoryID]), RESIDUAL:([AdventureWorks].[Production].[ProductSubcategory].[ProductSubcategoryID] as [s].[ProductSubcategoryID]=[AdventureWorks].[Production].[Product].[ProductSubcategoryID] as [p].[ProductSubcategoryID]))
     |–Compute Scalar(DEFINE:([Expr1004]=CONVERT_IMPLICIT(int,[Expr1016],0)))
     |    |–Stream Aggregate(GROUP BY:([p].[ProductSubcategoryID]) DEFINE:([Expr1016]=Count(*)))
     |         |–Sort(ORDER BY:([p].[ProductSubcategoryID] ASC))
     |              |–Clustered Index Scan(OBJECT:([AdventureWorks].[Production].[Product].[PK_Product_ProductID] AS [p]))
     |–Clustered Index Scan(OBJECT:([AdventureWorks].[Production].[ProductSubcategory].[PK_ProductSubcategory_ProductSubcategoryID] AS [s]), ORDERED FORWARD)


The Stream Aggregate operator is where the GROUP BY work is done. Having sorted the data, the system can look through it for changes, calculating whatever aggregates are needed at the time. The properties window or the text view show clearly that the count is being calculated here.

And they show that the field being grouped by is p.ProductSubcategoryID. But our query said we should group by s.Name. So what’s going on? It’s not picking the right field. It’s not even picking the right table.

But our system has a unique index on s.Name. GROUP BY looks for the distinct values from the fields in the GROUP BY clause, but if a unique index is on one of those fields, every row in that table is already known to be distinct. Our unique index is letting the system have some more freedom.
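
In AdventureWorks, that unique index ships with the database – on a standard install it’s called AK_ProductSubcategory_Name. If you want to confirm what’s on your copy, a quick check looks something like this:

-- List the indexes on ProductSubcategory, flagging which ones are unique.
SELECT i.name AS IndexName, i.is_unique
FROM sys.indexes i
WHERE i.object_id = OBJECT_ID('Production.ProductSubcategory');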

Because s.ProductSubcategoryID is also known to be unique (it happens to be the Primary Key (PK) of the table), it is logically equivalent to consider this field in place of s.Name. And as we are joining to the Product table on this field, it is even okay to group by the field from Product. The s.Name field is fetched later and brought into our result set using the Merge Join operator. It’s not hard to recognise that our query is asking for the number of products per subcategory, and that we’re only really grouping by s.Name because that’s the field that we want to display.

Now let’s consider what happens if we remove the unique index on s.Name. Interestingly, if you have been evaluating your indexes using sys.dm_db_index_usage_stats, you may have concluded that the index isn’t being used. Unfortunately, the fact that the query optimizer does actually use the index isn’t reported very obviously. Once the index is removed, the query plan changes. This demonstrates that the index is definitely being used, even if it’s not used in a way that gets reflected in sys.dm_db_index_usage_stats.
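
If you’re following along, this is all it takes to remove it (again assuming the standard AdventureWorks name – and script out its definition first, so you can recreate it afterwards):

-- Remove the unique index so the optimizer can no longer treat s.Name as distinct.
DROP INDEX AK_ProductSubcategory_Name ON Production.ProductSubcategory;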

|–Compute Scalar(DEFINE:([Expr1004]=CONVERT_IMPLICIT(int,[Expr1007],0)))
     |–Stream Aggregate(GROUP BY:([s].[Name]) DEFINE:([Expr1007]=Count(*)))
          |–Sort(ORDER BY:([s].[Name] ASC))
               |–Hash Match(Inner Join, HASH:([s].[ProductSubcategoryID])=([p].[ProductSubcategoryID]), RESIDUAL:([AdventureWorks].[Production].[ProductSubcategory].[ProductSubcategoryID] as [s].[ProductSubcategoryID]=[AdventureWorks].[Production].[Product].[ProductSubcategoryID] as [p].[ProductSubcategoryID]))
                    |–Clustered Index Scan(OBJECT:([AdventureWorks].[Production].[ProductSubcategory].[PK_ProductSubcategory_ProductSubcategoryID] AS [s]))
                    |–Clustered Index Scan(OBJECT:([AdventureWorks].[Production].[Product].[PK_Product_ProductID] AS [p]))

It has clearly changed a lot. Now, the Stream Aggregate operator reports that it is grouping by s.Name – just like we asked. But you may also notice that the Estimated Subtree Cost is about 30% worse than the previous query. The system is treating this query in exactly the way that we asked – reporting the number of products per subcategory name, rather than per subcategory.

We can address this without the unique index, if we change the query. The answer is to simply include the Primary Key of the subcategory in the GROUP BY clause. Even though we don’t want to include that field in the result set, listing it in the GROUP BY clause will let the system optimise the query better even without the unique index. Bear in mind that if there is a duplicate subcategory name, there will be an extra row that didn’t appear before – but this is correct, because we’re now grouping by subcategory rather than by subcategory name. Having s.Name in the GROUP BY clause is effectively ignored (as in our original plan) – it’s only there because we want to list it in the SELECT clause.

SELECT s.Name, COUNT(*)
FROM Production.ProductSubcategory s
  JOIN
  Production.Product p
  ON p.ProductSubcategoryID = s.ProductSubcategoryID
GROUP BY s.Name, s.ProductSubcategoryID;

The unique index simply allows the PK to be implicitly inserted into the GROUP BY clause.

As a query developer, you should ask yourself whether you want to group by the entity described by the table, or just by a particular column. If you really want the entity, then include the PK in the GROUP BY clause, or else understand the impact of the unique index. And definitely appreciate that there may be queries which are helped by the existence of unique indexes.

I do think that it’s worth evaluating index usage. Having an unused index in your system is like having the Yellow Pages in your house if you never use it – just taking up cupboard space. Dropping an unused index may not work for you though, particularly if it’s unique.
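
If you do want to review index usage, a starting point might be a sketch like this – just remember that, as we’ve seen, a unique index can be helping the optimizer without ever registering a read here:

-- Indexes in the current database with no recorded reads since the last
-- service restart. Treat these as candidates to investigate, not as
-- indexes that are automatically safe to drop.
SELECT OBJECT_NAME(i.object_id) AS TableName,
       i.name AS IndexName,
       i.is_unique
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats u
    ON  u.database_id = DB_ID()
    AND u.object_id = i.object_id
    AND u.index_id = i.index_id
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND i.name IS NOT NULL  -- ignore heaps
  AND (u.user_seeks IS NULL
       OR u.user_seeks + u.user_scans + u.user_lookups = 0);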

JOIN simplification in SQL Server

This is another of my tips/techniques that I demonstrated recently. It is not a new SQL Server 2008 feature – it works just fine in older versions – I demonstrated it running on SQL Server 2005 but using SQL Server 2008 Management Studio. [Note: It’s ended up appearing earlier than another post that I wrote on the plane. This one is Tip #4.]

When you have a single-table query and then introduce a JOIN, there are a number of ways in which that JOIN impacts your query. I want to talk about four such JOIN effects that I’ve identified. Understanding these four effects will help you take advantage of the simplification I am about to show you. Don’t focus on the simplification, focus on the effects. I will show you later how the concept can be applied to help database developers.

1. Extra columns

This seems fairly straightforward – we have access to additional columns (the ones in the new table). It’s the main reason we tend to join to other tables – we simply don’t have all the data in our original table.

2. Duplicated rows

Not necessarily duplicated completely, but certainly a row in our original table may appear multiple times in our result set, if it matches with multiple rows in the new table. Notice that this doesn’t occur if the join-fields (those used in the ON clause) in the new table are known to be unique (as is the case with a Foreign-Key lookup).

3. Eliminated rows

In a similar manner, if a row in our original table doesn’t match with any rows in the new table, it might get eliminated from the results. Notice that this doesn’t occur if a LEFT JOIN or FULL JOIN is being used.

4. Added NULLs

This effect is somewhat rarer, but still worth mentioning. There may be NULLs added to our result set to reflect rows in our new table that don’t have matches in our original one. Notice that this doesn’t occur unless a RIGHT JOIN or FULL JOIN is being used.

Let’s examine some queries. They all use the AdventureWorks sample database.

SELECT p.ProductID, p.Name, p.Color, p.Size
FROM Production.Product p;

This is our starting place. Our control query if you like. Its execution plan is very simple – a clustered index scan on the Product table. There are no other tables that need to be involved. It returns 504 rows.

Now let’s involve another table – Production.ProductSubcategory.

SELECT p.ProductID, p.Name, p.Color, p.Size, s.Name as SubcatName
FROM Production.Product p
  JOIN
  Production.ProductSubcategory s
  ON p.ProductSubcategoryID = s.ProductSubcategoryID;

This is a fairly standard lookup query. It’s essentially still about the Product table, but we’re hooking into the Subcategory table to be able to fetch some more information. Clearly we want effect #1 to take place.

But what about the other effects?

We don’t have duplicated rows, as s.ProductSubcategoryID is known to be unique (it’s the Primary Key).

We’re not introducing NULLs on the Product table side of the query, as we’re not using RIGHT JOIN or FULL JOIN.

And because this is a foreign key relationship, our constraint should stop us from eliminating rows. Only valid values can be used in p.ProductSubcategoryID – but this field can take NULLs, so in fact, the Eliminated Rows effect is taking place. Looking at the results of the query shows that only 295 rows are returned. This can be avoided by using a LEFT JOIN instead of the INNER JOIN.

SELECT p.ProductID, p.Name, p.Color, p.Size, s.Name as SubcatName
FROM Production.Product p
  LEFT JOIN
  Production.ProductSubcategory s
  ON p.ProductSubcategoryID = s.ProductSubcategoryID;

Now we are only seeing the first JOIN effect. As expected, the execution plan involves indexes on both tables.

But look what happens if you remove s.Name from the query.

SELECT p.ProductID, p.Name, p.Color, p.Size
FROM Production.Product p
  LEFT JOIN
  Production.ProductSubcategory s
  ON p.ProductSubcategoryID = s.ProductSubcategoryID;

Now, none of our four effects are coming into play. And when we look at the execution plan, we see something surprising.

The query has simplified back down to a single-table query. The query optimizer has deemed that the JOIN wasn’t having any effect on the query, and has therefore removed it.

At the moment, this may seem quite academic, but this can be leveraged to make life a lot easier for the average database developer.

It turns out that about half the time I need to refer to Products, I need to perform a bunch of lookups. I need to look up ProductType information, ProductSubcategory information, ProductCategory information, and so on. But each of these lookups makes my query-writing a little slower. I might be quite quick at writing these queries, but my queries typically have ten lines or more that are just providing the lookup logic.

It would be so much nicer to have a view called ProductsPlus, which was a slightly denormalised version of the Product table – including all my lookup information.

The problem is that without an understanding of the four join-effects, the system is probably going to be performing all the lookups every time you use the ProductsPlus view, even if you’re not interested in all the lookup columns. By examining the join-effects, we can eliminate them, just like in the previous example.

Let’s consider the contents of our ProductsPlus view. We have foreign key relationships in place to assist. All the FK columns can take NULL, so we are using LEFT JOIN. If they didn’t allow NULL, INNER JOIN would be okay.
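
If you want to verify the nullability of those columns on your copy, a quick sketch:

-- Nullable FK columns mean an INNER JOIN would eliminate rows (effect #3),
-- which is why the view below uses LEFT JOIN throughout.
SELECT name, is_nullable
FROM sys.columns
WHERE object_id = OBJECT_ID('Production.Product')
  AND name IN ('ProductSubcategoryID', 'ProductModelID', 'SizeUnitMeasureCode');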

CREATE VIEW dbo.ProductsPlus
AS
SELECT p.ProductID, p.Name, p.Color, p.Size, s.Name AS SubcatName, c.Name AS CatName, m.Name AS ModelName, su.Name AS SizeUnit
FROM Production.Product p
  LEFT JOIN
  Production.ProductSubcategory s
  ON p.ProductSubcategoryID = s.ProductSubcategoryID
  LEFT JOIN
  Production.ProductCategory c
  ON s.ProductCategoryID = c.ProductCategoryID
  LEFT JOIN
  Production.ProductModel m
  ON m.ProductModelID = p.ProductModelID
  LEFT JOIN
  Production.UnitMeasure su
  ON su.UnitMeasureCode = p.SizeUnitMeasureCode
;

When I query this view using SELECT * FROM dbo.ProductsPlus, I get all the tables involved, of course.

[Image: the execution plan for SELECT * FROM dbo.ProductsPlus, showing all of the joined tables involved]

But because we took care when creating the view to avoid effects #2, #3 and #4, when we only need certain columns, the other tables are completely ignored.
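
For example (the column choices are mine), the first query below should produce a plan that touches only the Product table, while the second brings back just the Subcategory lookup:

-- Only Product columns requested: all four LEFT JOINs can be removed,
-- because none of the four join effects apply.
SELECT ProductID, Name, Color, Size
FROM dbo.ProductsPlus;

-- Ask for SubcatName too, and only the Subcategory join returns to the plan.
SELECT ProductID, Name, SubcatName
FROM dbo.ProductsPlus;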

Now your SELECT queries can be much shorter and easier to read, without involving tables needlessly.