Useful SQL Question and Answer sites

There are so many places to ask a question these days. I get plenty of questions via MSN Messenger and email, and do my best to answer those, of course. But there are many other options too. I figured I’d list some of the ones that I frequent, and challenge some of the readers here to check them out.

The MSDN Forums are terrific. Lots of really good people hang out there, including many Microsoft staff. They’re effectively the new version of the public newsgroups. It’s definitely worth asking (and answering) questions there, and I should probably choose this option more often for answering questions myself.

Experts-Exchange is a much-maligned site, largely because you need points to ask questions. You can get points through a paid subscription, but you can also get points by answering questions. If you answer just a few questions each month, you can become a recognised Expert on the site, which lets you ask as many questions as you like, and also gives you the option of an “ad-free” environment. Many people still joke about the way the site’s name reads if you ignore the hyphen, but if you are an expert, this site is definitely worth hanging out on. Registration is free (though it gets you no points to ask questions until you’ve started answering them), so why not register and start answering questions? They have a facility whereby Designated Experts can get emails about neglected questions, giving you a much better chance of an answer than on many other sites around. (Note – if you are a SQL MVP, or a MS employee, and you want to be fast-tracked into receiving the Neglected Questions notices, drop me a line and I’ll see what I can do for you.)

Stack Overflow is a current favourite amongst many, because of the number of people who seem to frequent the site. It’s clean (very few adverts hanging around), and people seem to rush to answer questions as soon as possible. From a purely SQL perspective, I find there is too much weighting towards the iterative languages there, so many of the SQL responses seem to be provided by people who aren’t really SQL specialists. But that doesn’t mean you won’t pick up some good tips there. I got started there by answering a question that has even ended up in the source for the site – which I’m still hoping will reach the magical “100 up-votes” – and I’ve continued to keep my eye out for questions there that need answering.

Server Fault is the system administrator cousin to Stack Overflow. If you have DBA-style questions rather than developer-style, then this site is very useful.

Using the same interface as Stack Overflow and Server Fault, but focussed purely on SQL Server, is Ask SCC. It’s a new player on the scene, but I think it will turn into a very useful site. The Stack Overflow engine isn’t bad at all, and the quality of answers at Ask SCC is excellent. I would love to see more people hang out there, as it serves a useful market for SQL specialists. At the moment it doesn’t do much traffic, but many of the people there are good SQL experts, and I’m convinced that you’ll get an excellent answer if you ask a question there. It also doesn’t seem to be collecting poor answers as quickly as many of the other sites, so the ratio of good answers to poor ones puts you in a good position as an asker. I’ve posted my Ask SCC and Stack Overflow ‘flairs’ here, so that you can compare the two. If the numbers on the Ask SCC one have climbed as high as the Stack Overflow one, then you’ll have a good indication that the traffic on Ask SCC has increased nicely.


In many ways, I tend to focus my efforts on the questions that aren’t getting answered, rather than trying to catch the newest questions. On many of these sites, I’d rather find the one that the asker has had trouble with, hoping to provide the elusive answer rather than the obvious one. The question that got me started on Stack Overflow was an exception, because I didn’t feel that any of the previous answers had really solved it properly; but on the whole, my approach doesn’t really fit with most of the answerers on that site. I like EE because there really does seem to be a focus on getting those elusive answers for people, and I know that Microsoft puts real effort into getting questions answered on their forums.

My challenge to you is to give back to the community this Christmas. Make it a resolution for 2010 if you will. Why not try to answer a question every week? And better still, make it one that everyone else has had trouble answering. Go to the lists of unanswered questions, and help someone out. Next time it might be you asking, and you’ll hope that someone takes the time to find your elusive question.

Plus, you might learn something!

Infinite Drill-through in a single SSRS report

Grant Paisley of Angry Koala and Report Surfer put me onto this a while back, and I have to admit I’m a bit of a fan. The idea comes from the way that SQL Server Reporting Services (both 2005 and 2008) handles parameters with Analysis Services, letting you make a report that drills through into itself, deeper and deeper into a hierarchy. Today I did a talk at the Adelaide SQL Server User Group, and mentioned this was possible (but didn’t have the time to demonstrate it properly).

When you parameterize an MDX query in SSRS, you use the STRTOMEMBER or STRTOSET function to handle the parameter. But the MDX has no other indication of which dimension, hierarchy or level is being passed in. If you grab the children of whatever you’ve passed in, you can easily put them on the Rows axis and get one level down. Pass the UniqueName of the member you’ve just provided back in as the next parameter, and you have infinite drill-through.

Look at the following MDX query:

WITH
MEMBER [Measures].[NextLevel] AS StrToMember(@SomeDate).Hierarchy.CurrentMember.UniqueName
MEMBER [Measures].[NextLevel Name] AS StrToMember(@SomeDate).Hierarchy.CurrentMember.Member_Name

SELECT
NON EMPTY { [Measures].[Internet Sales Amount], [Measures].[NextLevel], [Measures].[NextLevel Name] } ON COLUMNS,
NON EMPTY { StrToMember(@SomeDate).Children } ON ROWS

FROM [Adventure Works]

You see that I provide the UniqueName and Member_Name properties in known columns, so that I can easily reference them in my report. You’ll also notice that nowhere do I actually indicate which dimension I’m planning to drill down on, or which hierarchy the @SomeDate parameter refers to. I have suggested it’s a date, but only in name. At this point I also make sure that the Report Parameter is not restricted to values from a particular query, and I hide it from the user, since I’m going to be passing in UniqueName values, which aren’t particularly user-friendly.

If I start with [Date].[Fiscal].[Fiscal Year].&[2003], then my NextLevels will be [Date].[Fiscal].[Fiscal Semester].&[2003]&[1] and [Date].[Fiscal].[Fiscal Semester].&[2003]&[2]. This then continues down as far as I want it to go. I could always put a condition on my Action to pick up when there are no more levels, and potentially start down a different hierarchy. After all, I can always use a bunch of other parameters in the WHERE clause to slice the cube in other ways first, for placeholders. It really just comes down to MDX creativity to investigate different ways of drilling through the data.

Please bear in mind that other people may well have achieved the same sort of thing using a different query – I’m just posting what has worked for me. Hopefully by doing this, you can avoid making five drill-through reports just because your hierarchy has five levels. This might just remove 80% of your reporting effort!

High ROI items for SQL Server 2008

To persuade your boss to embrace an upgrade to SQL 2008, you need to know which features have a high Return On Investment. They may have seen presentations talking about features like Spatial, or MERGE (and been quite impressed), but they may well have left those presentations thinking about the effort that would be involved in rewriting applications to take advantage of these features. It’s all well and good to see your customers on a map, but someone has to make that spatial data appear from somewhere.

This post is a callout for features that will benefit you (and your boss) as soon as you do the upgrade (or soon after). And I welcome comments to list other items as well.

  • Block Computation (in SSAS – which reduces the effort in processing significantly, for no change in the application)
  • Transparent Data Encryption (in the Database Engine – which makes sure that data at rest is encrypted, with no change in the application)
  • Backup Compression (which reduces the size of backups, and can be set as the default so that existing backup scripts don’t need to change)
  • Data Compression (minimal change to turn on compression on tables which will compress nicely)
  • Filtered Indexes (because how far off is your next index creation, really?)
  • Auditing, Change Tracking & Change Data Capture (because they’re very easy to turn on, letting you review the data as you need it)
  • Export to Word in SSRS (because everyone’s wanted this for so long)
  • SSRS paging (because SSRS used to get _all_ the data for a report before rendering it – but not in 2008)
  • Resource Governor (easy to set up, nice to have in place for when you might want it)
  • Hot-add memory (so that you can just plug in more memory without having to do restarts)

I’m not suggesting that an upgrade should be done flippantly. You should still consider the effort of thoroughly testing your system under SQL 2008. But hopefully this list highlights some of the things that I’ve found to be good persuaders. Microsoft also publishes a full “What’s New in SQL Server 2008” list in Books Online.

Like I said, you may have other items on your own list, and I invite you to share them in the comments. You may also already have solutions in place for things like encryption, or be running Hyperbac or one of the other compression tools.

T-SQL Tuesday – A date dimension table with computed columns

Quite a few people have asked me to blog about what I do for a date dimension table. I’m talking about a table that Analysis Services references for a Time dimension. It’s going to contain every date in a particular range, and be flexible enough to cater for public holidays and other custom details.

There are plenty of options for this, and I’ll mention some of them a bit later. What I use most of the time is an actual table in the Data Warehouse, which I populate with a row for each date in the range I want to consider. This range starts well before the earliest date I could want, and I don’t leave gaps either. Some people like to only use dates that have fact data, but I prefer to have the dates going back as far as I like.

Let’s talk about what the table looks like, and then how it can be created.

I have a primary key on an integer based on the date, in the format YYYYMMDD, so today would have the number 20091208. I haven’t tried using the date type that’s available in SQL Server 2008 for a date dimension – I generally try to use numbers for dimension keys, and haven’t tested the alternative yet. Using an integer like this for the key in a date dimension is generally considered best practice.
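The YYYYMMDD key is simple arithmetic. Here’s a quick Python sketch (the function name is mine, purely for illustration – the real thing is a computed column in T-SQL) showing the idea:

```python
from datetime import date

def date_key(d: date) -> int:
    """Build a YYYYMMDD integer key, mirroring
    CONVERT(int, CONVERT(char(8), ActualDate, 112)) in T-SQL."""
    return d.year * 10000 + d.month * 100 + d.day

print(date_key(date(2009, 12, 8)))  # 20091208
```

Keys built this way sort in the same order as the dates themselves, which is part of why the convention is popular.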

I also have a column which is the actual date itself. I will use this as the Value column for the dimension key in Analysis Services. I also have various representations of the date in string form, such as “Tuesday December 8th, 2009”, “08/12/2009”, “8-Dec-2009”. One of these will be the Name column, but I may have others available for other properties and translations. A “12/08/2009” option may be preferable for a US translation, for example.

Columns in my table indicate which year it is, such as 2009. I’ll also throw in the start of the year (as a date), and something to indicate which Financial Year it is. In Australia, this is most easily handled by adding six months onto the current date and taking the year of this adjusted date (our FY starts on July 1st). I can subtract the six months back again to work out when the Financial Year starts. I try to keep the code quite simple, as I leave it with the client and hope they can maintain it as required. The trickiest I get is the DATEADD(month, DATEDIFF(month, 0, ActualDate), 0) technique for truncation, but I think this should be required knowledge for anyone handling dates.
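If you want to sanity-check the six-month-shift trick, here’s the same arithmetic as a Python sketch (helper names are mine, not part of the warehouse code):

```python
from datetime import date

def financial_year(d: date) -> int:
    # Shift forward six months, then take the year: dates from July
    # onwards land in the next calendar year, which is the label for
    # an Australian FY (starting 1 July).
    shifted_month = d.month + 6
    return d.year + (1 if shifted_month > 12 else 0)

def financial_year_start(d: date) -> date:
    # 1 July of the calendar year before the FY label
    return date(financial_year(d) - 1, 7, 1)

print(financial_year(date(2009, 12, 8)))        # 2010
print(financial_year_start(date(2009, 12, 8)))  # 2009-07-01
```

December 2009 correctly falls into FY2010, which began on 1 July 2009, while June 2009 stays in FY2009.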

For months, quarters, semesters, weeks, and so on, I will also prefer to have an integer as the key. A Month Key would take the format 200912 for this month, or 201001 for next month. Quarters can be done using 20094 and 20101, and so on.

This may all seem quite complex, but it’s something you only need to do one time.

Let me explain…

My table only really contains one field. Yes, just one. More might be required for custom fields, but where possible, I will just populate one field and let all the rest be handled using computed columns.

Even the primary key will be a computed column.

CREATE TABLE dbo.Dates (
  ActualDate datetime NOT NULL
,DateKey AS CONVERT(int, CONVERT(char(8), ActualDate, 112)) PERSISTED NOT NULL
    CONSTRAINT PK_Dates PRIMARY KEY
,CalendarYearKey AS YEAR(ActualDate) PERSISTED NOT NULL
,CalendarYearName AS CONVERT(char(4), YEAR(ActualDate)) PERSISTED NOT NULL
,CalendarYearStart AS DATEADD(year, DATEDIFF(year, 0, ActualDate), 0) PERSISTED NOT NULL
,FinancialYearKey AS YEAR(DATEADD(month, 6, ActualDate)) PERSISTED NOT NULL
,FinancialYearName AS CONVERT(char(4), YEAR(DATEADD(month, 6, ActualDate)) - 1)
    + '/' + RIGHT(CONVERT(char(4), YEAR(DATEADD(month, 6, ActualDate))), 2) PERSISTED NOT NULL
,FinancialYearStart AS DATEADD(month, -6, DATEADD(year, DATEDIFF(year, 0, DATEADD(month, 6, ActualDate)), 0)) PERSISTED NOT NULL
,MonthKey AS CONVERT(int, CONVERT(char(6), ActualDate, 112)) PERSISTED NOT NULL
,MonthName AS CASE MONTH(ActualDate)
                 WHEN 1 THEN 'Jan'
                 WHEN 2 THEN 'Feb'
                 WHEN 3 THEN 'Mar'
                 WHEN 4 THEN 'Apr'
                 WHEN 5 THEN 'May'
                 WHEN 6 THEN 'Jun'
                 WHEN 7 THEN 'Jul'
                 WHEN 8 THEN 'Aug'
                 WHEN 9 THEN 'Sep'
                 WHEN 10 THEN 'Oct'
                 WHEN 11 THEN 'Nov'
                 WHEN 12 THEN 'Dec'
               END + ' ' + CONVERT(char(4), YEAR(ActualDate)) PERSISTED NOT NULL
,FrenchMonthName AS CASE MONTH(ActualDate)
                 WHEN 1 THEN 'janv'
                 WHEN 2 THEN 'févr'
                 WHEN 3 THEN 'mars'
                 WHEN 4 THEN 'avr'
                 WHEN 5 THEN 'mai'
                 WHEN 6 THEN 'juin'
                 WHEN 7 THEN 'juil'
                 WHEN 8 THEN 'août'
                 WHEN 9 THEN 'sept'
                 WHEN 10 THEN 'oct'
                 WHEN 11 THEN 'nov'
                 WHEN 12 THEN 'déc'
               END + ' ' + CONVERT(char(4), YEAR(ActualDate)) PERSISTED NOT NULL
-- Many more columns following
);

You will notice that I have used ugly long CASE statements for the MonthName columns, and I do the same for the names of the days of the week. The reason is betrayed by the second example: DATENAME (or any kind of conversion that relies upon the language setting, such as CONVERT(char(3), ActualDate, 100)) is non-deterministic, and therefore can’t be used in a persisted computed column. (I do wish that CONVERT could take a language setting, so that I could tell it to convert in English, French, etc., and make it deterministic.) Why do I want them persisted? Well… I’m just more comfortable with them being persisted. After all, I could use a view for the whole thing at this stage, but I’m really not that comfortable with the table being generated on the fly when it comes to processing. The table is essentially read-only, after all.

As well as many computed columns like this, I will also have some that are not computed, such as a column to indicate if it’s a public holiday. This could be computed, at a push, as public holidays generally follow a system. Even Easter follows a formula that could be applied. But if the company takes a special day, or if government declares an extra day for some reason, then problems can start popping up. I find it convenient to have columns that can be updated directly (but which have defaults, of course).

One great thing about this method is that the table can be populated very easily: the only field you insert data into is the ActualDate column. Generating a list of dates is as easy as using DATEADD() with a nums table, as I’ve written many times before, including in a StackOverflow answer. If you need more dates, just insert more.
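For anyone wanting to check the “no gaps” property outside SQL, here’s a small Python sketch (helper name of my own) producing the same kind of contiguous date list that DATEADD() over a nums table gives you:

```python
from datetime import date, timedelta

def date_range(start: date, end: date):
    """Yield every date from start to end inclusive - no gaps,
    the equivalent of DATEADD(day, n, @start) over a nums table."""
    for n in range((end - start).days + 1):
        yield start + timedelta(days=n)

dates = list(date_range(date(2009, 12, 29), date(2010, 1, 2)))
print(len(dates))  # 5
```

The range spans the year boundary without any missing days, which is exactly what the dimension needs.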

As I mentioned before, a view could be used for this. It is very easy to generate a list of dates, and then all the other calculations could be done as other columns in the view. You could perform an OUTER JOIN into a table which lists public holidays and other special days. Analysis Services will happily handle this in much the same way. I just prefer to have it exist as a table, which I feel I have more control over.

This post has been part of T-SQL Tuesday, hosted this month by Adam Machanic. You should be able to see many other posts related to datetime mentioned as Trackbacks to Adam’s post.

Randomising data

I recently needed to randomise some data to keep some information secret. The idea was that it looked roughly similar to the real data, but was sufficiently different to avoid any identifying features.

After discussing it with the client, it was agreed that…

1/ ID numbers would be mixed around amongst the people in the list. They would all remain real numbers (and so would match the rules governing what makes up a legitimate number), but they would be reordered at random amongst the people.

2/ Dates would be set randomly between the minimum and maximum dates available.

3/ Strings would become a series of random letters, but the same length as the original.

4/ Genders would be assigned a random value of M, F or N (Not Specified).

5/ Numeric fields (such as salaries) would be multiplied by somewhere between 0.1 and 10, with 1 being the median value used.

Here’s how I did it.

1/ I used ROW_NUMBER() for this, twice: once ordered by the original ID field, and once ordered by NEWID() (which gives a good-enough random order). I could then perform a self-join, and do the update.

with twonums as
(
    select id,
           row_number() over (order by id) as orig_rownum,
           row_number() over (order by newid()) as new_rownum
    from dbo.People
)
update t1
set id = t2.id
from twonums t1
join twonums t2
    on t1.orig_rownum = t2.new_rownum;

This mechanism takes advantage of the fact that you can update a CTE. The fact that row_number() assigns each number exactly once means that I update every row, and no row gets updated twice.

2/ To generate a random positive value less than some number N, I use abs(checksum(newid())) % N. Apparently this gives a good distribution of values. If N is the number of days between two dates (plus one, in case the two dates are identical), then the result can be added back onto the first date to get a random date between the two.

update dbo.theTable
set theDate = (select dateadd(day,
                   abs(checksum(newid())) %
                      (datediff(day, min(theDate), max(theDate)) + 1),
                   min(theDate))
               from dbo.theTable);

If you prefer, you could populate variables @startDateRange and @endDateRange and then use them instead of having dbo.theTable in the sub-query like this. The Query Optimizer should be able to kick in and work out those values just once for you anyway (which it did when I checked the Execution Plan).
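The modulo trick translates directly to other languages, which makes it easy to sanity-check. Here’s a Python sketch (my own function name) showing that the generated dates always land inside the inclusive range:

```python
import random
from datetime import date, timedelta

def random_date(start: date, end: date) -> date:
    # Mirrors dateadd(day, abs(checksum(newid())) % N, start) in T-SQL;
    # the +1 on the day-span makes the end date itself reachable
    span = (end - start).days + 1
    return start + timedelta(days=random.randrange(span))

start, end = date(2009, 1, 1), date(2009, 12, 31)
samples = [random_date(start, end) for _ in range(1000)]
assert all(start <= d <= end for d in samples)
```

Note that without the +1, the end date could never be chosen, and two identical dates would cause a modulo-by-zero error.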

3/ Without stepping through each character in a string, it doesn’t seem particularly trivial to change each one to something different. For this, I took advantage of SQL 2005’s ability to use expressions in the TOP clause, and the string concatenation trick available from FOR XML PATH('').

Using any table with sufficient rows in my FROM clause, I generated random letters by converting a number from 0 to 25 into a letter – adding the number to ASCII('A') and converting back to a character did the trick. Restricting the number of rows returned to the number of characters in the name gave me a set of characters, which I could easily concatenate using FOR XML PATH('').

select top (len(isnull(GivenNames, ''))) char(abs(checksum(newid())) % 26 + ascii('A'))
from sys.all_objects
for xml path('')

4/ Assigning a random gender to a row was very easy. I simply took a random value between 0 and 2 and used it with CASE.

case abs(checksum(newid())) % 3 when 0 then 'M' when 1 then 'F' else 'N' end

5/ Finally, multiplying by a value between 0.1 and 10. It’s easy enough to generate a value between 0 and 99, add one and divide by 10.0 to get values in this range, but that isn’t really what’s desired, as it would give a distribution centred around five. The distribution that I want is actually logarithmic, giving roughly as many values less than 1 as greater.

Really what I wanted was to get a number y between –1 and 1, and use 10^y, as 10^(-1) is 0.1, 10^0 is 1, and 10^1 is 10. This seemed quite easy, except that the POWER() function in SQL returns the data type of its first argument, so raising the integer 10 to a fractional power doesn’t give the fractional result I need. Generating a value in the range –1 to 1 was simple: checksum(newid()) % 1001 (ignoring the ABS() function this time), divided by 1000.0. Then, to raise 10 to the power of this value, I remembered the logarithm rules from school, which say that x^y is the same as the exponent of log(x) * y. Therefore, I used:

exp(log(10) * (checksum(newid()) % 1001 / 1000.0))

…which did the trick nicely.
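To convince yourself of the maths, here’s the same identity checked in Python (a sketch with my own helper name): exp(log(10) * y) is just 10^y, and a uniform y between –1 and 1 gives a multiplier between 0.1 and 10 whose median sits at 1:

```python
import math
import random

def multiplier() -> float:
    # y uniform over {-1.000, -0.999, ..., 1.000}, then 10**y via exp/log,
    # mirroring exp(log(10) * (checksum(newid()) % 1001 / 1000.0)) in T-SQL
    y = random.randint(-1000, 1000) / 1000.0
    return math.exp(math.log(10) * y)

# The identity: exp(log(10) * y) == 10 ** y
assert abs(math.exp(math.log(10) * 0.5) - 10 ** 0.5) < 1e-9

samples = sorted(multiplier() for _ in range(10001))
print(round(samples[5000], 2))  # the median lands very close to 1
```

Half the multipliers shrink the value and half grow it, which is exactly the logarithmic behaviour described above.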

The client verified that the data was sufficiently random (as well as expressing some surprise that there’s a practical use for LOG() and EXP()), and I had an environment to which I could grant developers access.

Dangers of BEGIN and END

I’ve presented this material at three conferences recently, so it’s about time I wrote a blog post on it…

As programmers, we love modularisation – even in the SQL space. We make stored procedures, views, and functions to encapsulate our code. This improves maintainability, simplifies the development experience, and is generally useful.

But there’s a time when it’s a bad thing for SQL Server.

There’s an amazing component of SQL Server called the Query Optimizer (I always want to write Optimiser, but I’m assuming it’s a proper noun and putting up with the US spelling). When we write queries in T-SQL, it’s the Query Optimizer that works out how to actually run the query. It works out what indexes can be used to improve performance, what order tables (well, indexes and heaps) should be accessed, how to perform the joins, and so on. I find that a rough appreciation of the power of the Query Optimizer can really help query writers.

For example, the Query Optimizer will translate a correlated sub-query in the SELECT clause into a LEFT OUTER JOIN, so that you don’t have to. It will also work out when joins can be rendered pointless and thereby removed from the plan altogether. If you let these principles help you in your query design, you can see significant benefits. It also helps you write queries that are easier to maintain, as there’s little point in trying to be clever by writing a query in a different way if the Query Optimizer will handle it in the same way as before.

If you use a view in another query, the definition of the view is used in the query as if you had written it with a sub-query. A view is simply that – a stored sub-query. They are sometimes referred to as ‘virtual tables’, but I disagree. They are stored sub-queries. Sure, the analogy falls down when you start considering indexed views, but on the whole, a view should be seen as a stored sub-query. The Query Optimizer takes the view definition, applies it in the second query, simplifies it where possible, and works out the best way of executing it. If you’re only interested in a couple of columns out of the view, the Query Optimizer has an opportunity to take that into consideration.

Stored procedures are different. You can’t use a stored procedure in an outer query. The closest you can get to this is to use OPENROWSET to consume the results of a stored procedure in an outer query, but still the whole procedure runs. After all, it’s a procedure. A set of T-SQL commands, not a set of queries. I see the clue to this as the BEGIN and END that stored procedures generally use. I like stored procedures, but I do get frustrated if they’re returning more information than I need, since I have no way of letting the system know that maybe it doesn’t need to do as much work.

Functions are in between, and come in two varieties. A function can be inline, or it can be procedural. I don’t think you find this differentiation in many places – and normally people talk about this particular drawback as being associated with Scalar Functions as compared to Table-Valued Functions, but the problem is actually one of simplification.

An inline function must be a table-valued function at this point in time. It takes the form:

CREATE FUNCTION dbo.fnFunctionName(<paramlist>) RETURNS TABLE AS
RETURN
( SELECT …. );

It is always this form, with a sub-query enclosed in a RETURN statement. It can return many columns and many rows, but the definition of the table is implied by the SELECT clause. This is essentially a view that can take parameters.

The other form is one that involves BEGIN and END. Scalar functions (unfortunately) require this (but hopefully one day will not).

CREATE FUNCTION dbo.fnFunctionName(<paramlist>) RETURNS int AS
BEGIN
    RETURN ( … );
END;

As the RETURN statement is enclosed between a BEGIN and END, it can be preceded by other statements, used in working out what value should be returned.

Table-valued functions can use BEGIN and END, when multiple lines are required to calculate the rows in the table being returned.

CREATE FUNCTION dbo.fnFunctionName(<paramlist>) RETURNS @table TABLE (<fields>) AS
BEGIN
    /* statements that populate @table */
    RETURN;
END;
In this kind of function, the table variable is populated with data, and returned to the outer query when the RETURN command is reached.

But when the Query Optimizer comes across a procedural function, it cannot simplify it out and executes the function in a different context.

The execution plan will report that the cost of running the function is zero – but it’s lying. The way to see the impact of the function is to look in SQL Profiler, where you’ll see potentially many calls to the function, as it works out the result for each different set of parameters it’s passed. The pain can be quite great, and you’d never notice it if you only looked at Execution Plans.

The moral of the story is to make sure that your functions are able to be simplified out by the Query Optimizer. Use inline table-valued functions even in place of scalar functions. You can always hook into them using CROSS/OUTER APPLY in your FROM clause, or even use them in your SELECT clause (not “SELECT Claws” – that would make it related to my company LobsterPot Solutions, and “SELECT Claus” is just a bit Christmassy) using a construct like SELECT (SELECT field FROM dbo.fnMyTVF(someParam)) …

Consider the Query Optimizer your friend. Study Execution Plans well to look at how the Query Optimizer is simplifying your query. And stay away from BEGIN and END if possible.

Plane old trouble

Speaking at two SQL conferences in the last two months (SQL Down Under in New South Wales, and SQLBits V in Old South Wales), I’ve had some flights to do. This isn’t normally a big deal, but both times I managed to have some stress getting home.

Firstly, I should point out that both conferences were really good. They were very different to each other – SQL Down Under was held at a university campus in a country town, while SQLBits was in a 5-star hotel with a conference centre – but both were great events. There’s something about a conference dedicated to a single technology that makes it special. At TechEd you brush shoulders with people who have very different areas of expertise, but at a dedicated SQL conference, you end up having a lot in common with just about everyone.

At SQL Down Under I got to catch up with many people from around Australia that I see only a couple of times a year. Friends that I know from previous trips to Wagga, or from user groups I’ve visited, TechEd, even the occasional class I’ve taught. The content is always good, and it’s great to see people honing their skills in presenting. This year one of the highlights was seeing John Walker present for the first time.

At SQLBits, I got to meet many people for the first time (first time I’ve done a SQL conference in the UK). I got to see old friends like Simon, Jamie, Tony & Darren again, and meet people like Chris, Chris, Allan, James & Martin (of course there are many more names I could list). I had never heard any of these guys present before, so I tried to get around to as many sessions as I could. I was disappointed that the sessions I was giving clashed with Brent’s, but I was pleased that I could meet him for the first time.

Coming home from Wagga, I had to meet a flight taking me from Melbourne to Adelaide. I had allowed plenty of time to make the transfer, but when the flight out of Wagga was well over an hour and a half late, I knew I couldn’t make it. There was a fair crowd of SQL people at the airport, so we joked about different tactics that could be used to help me make the connection. The flights were with different carriers, so apart from letting me check in for the second flight on their computer, there was nothing the Wagga staff could help with (they were very nice and helpful though – they let me use their printer and everything). When I got to Melbourne, it turned out that the flight I was booked on had been cancelled, and my ticket transferred to a later flight, which I managed to catch. Home later than expected, but crisis avoided somehow…

Not so lucky on the way home from the UK. My flight to Australia stopped at Bangkok on the way, and as I got off, the crew were saying that we had an hour and a half. I got back to the gate in about an hour-fifteen, only to be told that I was too late. Apparently the 90 minutes was from the wheels touching down to the wheels taking off again, and we only had about 30 minutes in which to get back to the gate (bearing in mind that at Bangkok airport you need to wander down from the gate to a security area, and get re-admitted to the Departure area, before returning back to the same place you got off the plane in the first place). 24 hours later I got on a flight to Australia, but not before a stressful night trying to work out how best to get a replacement ticket, considering that nowhere in Bangkok was open for the first 16 hours I was accidentally in Thailand.

It hasn’t put me off the idea of travelling to conferences. Everything that happens gives me a story to tell, and I guess these last couple of months have just given me more stories than I expected. If you’re into SQL, and there’s a SQL conference near you, you should really try to get to it. Just pray that you have a better time getting home than I did.

StreamInsight talk coming up at SQLBits

My talk on StreamInsight is up next. I’ll try to blog more about that later. For now, I want to mention more about SQLBits itself. This is by far the largest SQL-only conference I’ve attended (I haven’t been to SQL-PASS yet), and it’s great to be involved.

Yesterday I had an all-day seminar about the new items for Developers in SQL 2008. It was a good time – the delegates responded very positively, and many of them have caught up with me since.

But for me, the conference has been a great way of catching up with (and meeting for the first time) a bunch of SQL people that I rarely see. I’ve met people who lived only a few miles from where I grew up, and people who read my blog (Hi!), discovered people who have connections to Adelaide, and even found that my Adelaide friend Martin Cairney (who is also here) has a strange connection to Donald Farmer (of Microsoft) – their parents shared a back fence or something… Now Trevor Dwyer tells me a colleague of his knows me from somewhere… the world is very small here.

My StreamInsight talk will be interesting I hope. I have some stuff to show off, and I plan to involve the audience a little as well. If you’re at SQLBits and feel like being involved in an interactive session, then definitely come along. I want to hear from people in the audience who have dabbled with StreamInsight and also other vendors’ Complex Event Processing offerings. This is a brand new technology from Microsoft, and there will be a large range of adoption levels in the room.

SQLBits V, in Old South Wales

I recently gave a talk in New South Wales, so now I’m going to give one in Old South Wales. In Newport, to be precise.

As I’ve written before, I’ve been a big fan of the SQLBits conferences that are run by many UK-based friends of mine. Unfortunately for them, they had a presenter pull out recently, and unfortunately for them, I’m going to fill in.

Weather-wise, it’ll be a nice change from the scorching weather we’ve had in Adelaide recently. We’re setting new records for days over 30C here – a streak that will be broken on Monday if the temperature drops to 28C, before climbing again after that. I’ll be going to temperatures that are more like 40F than 40C; I think I’ll be the one wearing two jumpers and a coat.

I’ll be involved in all three days of the conference, doing a full day of SQL 2008 for Developers on the Thursday, and hour-long sessions on the next two, on the topics of StreamInsight and Query Simplification respectively.

It’s a great opportunity to be involved, and I’m sure it’ll be a good time. There are several tracks, and the quality is bound to be high. I’m planning to attend sessions by friends – lots of people I’ve never actually heard present before.

If you’re going to be in the UK on Nov 19-21, make sure you get along, and say hi! I’ll also be receiving delivery of my (signed) copies of the SQL Server MVP Deep Dives book, which is going to be good too.

Finding the Microsoft File Transfer Manager

This is really just a reminder blog post for me. Way too often I find that I have closed the Microsoft File Transfer Manager for one reason or another, and I want to start it up again to resume some download from my MSDN Subscription. Like today: I need to grab the latest version of SQL Server 2008 R2, which includes built-in Split, RegEx and Fuzzy matching features for T-SQL (something I’ve wanted for a long time, and that I’ll blog more about later, once I’ve had a chance to try it out). It’s a large download – not something I want running when I’m on a 3G connection, but one I’m happy to use a WiFi connection for. And there’s just the odd time when I’ve forgotten I’m downloading something and it’s sitting there, partly downloaded…

So I find myself looking for the application to run. When it installs by default because you click on a download, it doesn’t put a shortcut in the Start Menu anywhere. So I’ve been known to even go and restart a download (just telling it to Cancel instead of agreeing to it), just to start the process. Then I can jump into PowerShell and run this command:

get-process -name transfermgr | select path

…which tells me that the path of the transfermgr process is at c:\windows\downloaded program files\TransferMgr.exe

So now I can just run it from there. The idea of this post is to remind me where it is, so that I don’t have to hunt for it every time.
