
Infinite Drill-through in a single SSRS report

Grant Paisley of Angry Koala and Report Surfer put me onto this a while back, and I have to admit I’m a bit of a fan. The idea comes from the way that SQL Server Reporting Services (both 2005 and 2008) handles parameters with Analysis Services, which lets you make a report that drills through into itself, deeper and deeper into a hierarchy. Today I did a talk at the Adelaide SQL Server User Group, and mentioned this was possible (but didn’t have the time to demonstrate it properly).

If you parameterize an MDX query in SSRS, you use the STRTOMEMBER or STRTOSET function to handle the parameter. But the MDX has no other indication of what dimension, hierarchy or level is being passed in. If you grab the children of whatever you’ve passed in, you can easily put them on the Rows axis and get one level down. Pass the UniqueName of whatever you’ve just provided back in as the next parameter, and you have infinite drill-through.

Look at the following MDX query:

WITH
MEMBER [Measures].[NextLevel] as StrToMember(@SomeDate).Hierarchy.CurrentMember.UniqueName
MEMBER [Measures].[NextLevel Name] as StrToMember(@SomeDate).Hierarchy.CurrentMember.Member_Name

SELECT
NON EMPTY { [Measures].[Internet Sales Amount], [Measures].[NextLevel], [Measures].[NextLevel Name] } ON COLUMNS,
NON EMPTY { (StrToMember(@SomeDate).Children ) } ON ROWS

FROM [Adventure Works]

You see that I provide the UniqueName and Member_Name properties in known columns, so that I can easily reference them in my report. You’ll also notice that nowhere do I actually indicate which dimension I’m planning to drill down on, or to which hierarchy the @SomeDate parameter refers. I have suggested it’s a date, but only in name. At this point I also make sure that the Report Parameter is not restricted to values from a particular query, and I hide it from the user. I’m going to be passing in UniqueName values, which aren’t particularly user-friendly.

If I start with [Date].[Fiscal].[Fiscal Year].&[2003], then my NextLevels will be [Date].[Fiscal].[Fiscal Semester].&[2003]&[1] and [Date].[Fiscal].[Fiscal Semester].&[2003]&[2]. This then continues down as far as I want it to go. I could always put a condition on my Action to pick up when there are no more levels, and potentially start down a different hierarchy. After all, I can always use a bunch of other parameters as placeholders in the WHERE clause, to slice the cube in other ways first. It really just comes down to MDX creativity to investigate different ways of drilling through the data.

Please bear in mind that other people may well have achieved the same sort of thing using a different query – I’m just posting what has worked for me. Hopefully by doing this, you can avoid making five drill-through reports just because your hierarchy has five levels. This might just remove 80% of your reporting effort!

Access’ DISTINCTROW keyword explained – it means WHERE EXISTS

Steve Koop spoke recently at the Adelaide SQL Server User Group, talking about things which don’t convert particularly nicely when upsizing from Microsoft Access to SQL Server 2008. I think this is a really important thing for SQL people to know, as there seem to be many Access databases living in even the largest organisations.

One of the things he mentioned was DISTINCTROW. I’ve never really known what DISTINCTROW does, so I asked him. He sent me a link explaining the difference between DISTINCTROW and DISTINCT, which described it as “DISTINCTROW works on records, not just individual fields”. This might be good for some people, but I wanted to know a little more.

http://msdn.microsoft.com/en-us/library/aa140015.aspx says “The DISTINCTROW keyword is similar to the DISTINCT keyword except that it is based on entire rows, not just individual fields.” – but it also goes on to say a little more, confirming my suspicions. “It is useful only when based on multiple tables, and only when you select fields from some, but not all, of the tables.”

So DISTINCTROW is more about the JOIN type than anything else. It only applies if you are querying multiple tables, but not returning fields from all of them. So it’s a SEMI JOIN to the unused tables, which you write in SQL Server using a WHERE EXISTS clause. It’s not really like DISTINCT at all – it’s about doing a Join without seeing the ‘duplication’ effect, clearly only feasible if you’re not returning columns from the other table.

If you’re not sure what a Semi Join is, then just think about the WHERE EXISTS clause, and it should become clear. If you look at the execution plan of a query in SQL Server that uses WHERE EXISTS, you’ll see that it’s doing a Semi Join. And if you’re looking at queries which use DISTINCTROW, consider changing them to WHERE EXISTS instead.
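
To make this concrete, here’s a sketch using hypothetical Customers and Orders tables – the Access original as a comment, with the SQL Server rewrite below it:

-- Access: DISTINCTROW stops each customer appearing once per order,
-- because no Orders fields are being returned:
--
-- SELECT DISTINCTROW Customers.CompanyName
-- FROM Customers INNER JOIN Orders
-- ON Customers.CustomerID = Orders.CustomerID;

-- SQL Server: the same semi join, expressed with WHERE EXISTS:
SELECT c.CompanyName
FROM dbo.Customers AS c
WHERE EXISTS (SELECT *
              FROM dbo.Orders AS o
              WHERE o.CustomerID = c.CustomerID);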

A review – Microsoft Windows PowerShell Step By Step (Ed Wilson)

Another book review, and another giveaway for the Adelaide SQL Server User Group. This time, it’s Microsoft Windows PowerShell Step By Step.

Last month I picked up the Windows PowerShell Scripting Guide, wondering if it was going to be a good recommendation for people who were interested in getting into PowerShell. Even though I thought the book was very good (particularly if you want to use PowerShell to access the innards of a Windows installation), it didn’t seem like the right book to recommend to PowerShell beginners.

This book is though, and I’ll definitely recommend it for people wanting to get into PowerShell.

It’s worth pointing out that it’s a thinner (and cheaper) book than the other one. It certainly doesn’t cover how to perform the variety of Windows Admin commands that the Scripting Guide did. But what it replaces that with is a guide on getting the most out of PowerShell. PowerShell is used in so many different products now, it’s no longer just the domain of Windows Administrators. Developers can use PowerShell for unit tests. DBAs can use PowerShell to perform routine maintenance. Just about everyone in IT could use PowerShell to make their job easier.

My background is in development, not system administration, so I’m always keen to write code to automate tasks. I was never that keen on VBScript, but PowerShell gives me a much richer environment while also being much closer to the system itself. I can hook into subsystems of Windows and .Net objects easily, and into environment variables, certificates and more, piping the results into other functions and utilities to extend the scripts as much as my imagination can provide. I’m always happy to recommend PowerShell as an important skill for the future.

And this book can help people get introduced to PowerShell, walking them (step by step, just like the title suggests) into the depths of PowerShell – covering functions and providers, and giving a good introduction to using PowerShell with WMI and Exchange. I’ve enjoyed reading it, and plan to re-read it a few times over the next months, looking for those techniques that I’m not using (yet) but that I’d like to become more familiar with.

PowerShell reminds me of my early days using the vi editor (which I still use as my preferred text editor in Windows). We were forced to use vi at university, and the learning curve felt very steep. It seemed as if I learned some new (and better) way of doing something every day, to the extent that these days I still find it by far the quickest way to edit many types of text files. This book feels like those fellow students of mine, pointing out features I didn’t know existed even after I felt proficient (although I’m pleased to mention, not as many as I thought there might have been).

The book also has a CD full of examples that I need to find time to go through (and tweak, and practise, and learn). It includes a bunch of utilities, and an electronic copy of the book as well. Having said that, the book isn’t too big to carry with me for a while, and I’m sure it will be a regular read for those “no electronic device” periods of flights.

Running a user-group meeting on a hot day

It’s 44C here today in Adelaide. Yesterday made it to 43.2C, and my lunchtime session of the Adelaide SQL Server User Group had its lowest attendance for a long time. There were a few influencing factors which I will need to learn from:

1. The heat. When the forecast says 41C and you’re hoping that people will leave their air-conditioned offices to come to a meeting (albeit in another air-conditioned office), you’re probably hoping for a miracle. Mind you – I’d happily be there today. I’m waiting for a plumber before I can turn the water back on after a shower tap died last night. Once the water’s back on, the aircon can come back on. Next time, I’ll have to buy ice-cream for everyone.

2. The day of the week. We have recently moved our meetings from the second Thursday to the fourth Tuesday, for reasons beyond our control (venue hassles which are now sorted). So people will still be adjusting to that. Some people may have found that Tuesdays just don’t work for them, other people will have dismissed the group a few years ago if Thursdays didn’t work for them. Changing the days just doesn’t work, and the sooner people get used to the new day, the better.

3. The public holiday. No, I didn’t run the group on a public holiday – that would be crazy. But Australia Day was on the day before, which effectively made the user-group run on the acting-Monday. Lots of people will have been playing catch-up from the weekend.

4. First day of the school term. Most schools in South Australia started the new year yesterday. My boys didn’t – they started today. In fact, Samuel came along to the UG (again – he’s been to three or four meetings now), and even helped in a quick demonstration of Windows 7.

5. The presenter. I presented. I talked about PowerShell and SQLPS – showing the types of things it seems appropriate for, and the types of things it doesn’t really suit. I doubt that my presenting would’ve worked against the attendance much, but there is a degree to which people hear me speak and give the odd tip or two every month, and so wouldn’t’ve been quite as keen as if someone were coming from interstate.

6. The short notice. I didn’t end up advertising this meeting until very late in the piece. January wasn’t that good a month for me – my back had been playing up (even spending half a night in Sydney Hospital when I was there), and I’d even spent a bit of time on Valium (which helps my back, but makes me fall asleep like an old person). I guess I’d been a bit distracted, but that’s no excuse for not getting things done.

I’m sure that there were things I could’ve done to help my January meeting. Without shifting the day to something irregular, there was nothing I could do about the public holiday or school going back. I also think having me present was probably the right thing to do – I wouldn’t want an interstate visitor to have the smaller January crowd. But yes, I should’ve advertised it sooner, and organised ice-cream. I honestly think that if I’d kept my eye on the weather forecast, and bought ice-cream for everyone, then attendance would’ve been better.

I’ve been to less attended groups before – but I’ve got used to having a larger crowd at the Adelaide SQL Server User Group than yesterday. Next year Australia Day falls on the 4th Tuesday, so that meeting will have to be moved. Oh well, can’t win them all…

OUTPUT clause – knowing what goes in, and what you’ve accidentally taken out

The OUTPUT clause has to be one of the best T-SQL features out there. It was new in SQL Server 2005, but it’s still remarkably little known. I guess like many of the features that were introduced in SQL 2005, many people just make do with the way they did things before.

The basic point of the OUTPUT clause is to turn a data modification statement into one that returns data (although this data can be redirected to an existing table by simply adding “INTO tablename” after it), by providing a way of accessing the inserted and deleted tables that we have been using in triggers for years. If a DELETE statement is being issued, only the deleted table is available, and similarly inserted for an INSERT statement. UPDATE provides both – and the MERGE statement (new in SQL Server 2008) populates both (although it feels as if a FULL JOIN has been applied to them).
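
As a minimal sketch (the dbo.Customers table and its columns here are hypothetical), a DELETE and an UPDATE with OUTPUT might look like this:

-- DELETE returns the rows it removed, via the deleted table:
DELETE FROM dbo.Customers
OUTPUT deleted.*
WHERE LastOrderDate < '20050101';

-- UPDATE can return the before and after image of each changed row:
UPDATE dbo.Customers
SET CreditLimit = CreditLimit * 1.1
OUTPUT deleted.CustomerID,
       deleted.CreditLimit AS OldLimit,
       inserted.CreditLimit AS NewLimit
WHERE Region = 'SA';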

I’m sure it’s easy to imagine the tremendous advantage to using it with UPDATE statements – auditing. It’s very easy to push the changes to an audit table, without the need to create a trigger to do it for us. But I find that the biggest advantage is for those times when carelessness has got the better of us. Hopefully these times are rare (and the more experienced we get with databases, the more we seem to respect the sanctity of the data), but if you always use “OUTPUT deleted.*” when deleting, or “OUTPUT deleted.*, inserted.*” when updating, then the odd time when you see more rows come back than you expected, you can have a plan associated with your “Oops” moment.
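
The auditing pattern might look like this – a sketch which assumes a hypothetical dbo.CustomerAudit table already exists. The INTO clause pushes the old values there as part of the UPDATE itself:

UPDATE dbo.Customers
SET CreditLimit = 5000
OUTPUT deleted.CustomerID, deleted.CreditLimit, GETDATE()
INTO dbo.CustomerAudit (CustomerID, OldCreditLimit, AuditDate)
WHERE CustomerID = 17;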

If you don’t use the OUTPUT clause, you get a message that tells you how many rows were affected by your query. If you expect that number to be small, and it’s actually quite large, you’ve probably done something wrong. “Oops” is probably an understatement. You may have started a transaction and be able to roll it back, but until such time as you get to that, you have locks which aren’t being released. Regardless of whether or not you can roll it back, having something which shows you what you’ve just done can really help you out. You can copy the data presented into Excel, or Notepad, or whatever, and work out your problem. If you can’t roll it back, then this may involve some sort of import process being quickly thrown together.

The benefit is far less obvious when inserting data – but in some ways, it’s actually even more useful.

We’ve had the @@IDENTITY and SCOPE_IDENTITY() functions available for some time, and they’re widely used. But if multiple records are inserted, or if the targeted table doesn’t contain an identity field, then they’re actually not quite so great. Knowing which record is which is a question of re-querying the data and hoping you can tell. But if you “OUTPUT inserted.*”, your application can immediately tell which row was inserted with which surrogate key (it’s less of an issue if you use a natural primary key, of course). With a large number of systems opting to use guids for PKs, defaulting to a value of newid(), it really helps to have an OUTPUT clause so that the guid doesn’t have to be generated prior to performing the insert.
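
For example (a sketch, assuming a hypothetical dbo.Orders table whose OrderID column defaults to newid()), the application gets the new guid straight back, without having to generate it first:

INSERT dbo.Orders (CustomerID, OrderDate)
OUTPUT inserted.OrderID, inserted.CustomerID
VALUES (17, GETDATE());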

The biggest caveat with the OUTPUT clause is that it can be ruined by triggers. If the target table has an enabled trigger, SQL Server won’t allow an OUTPUT clause without INTO – the trigger prevents the inserted and deleted tables from being returned at the end of the statement. I’d like to have a way of stating that the OUTPUT clause should still work, returning the state of the inserted and deleted tables prior to any triggers being called, but I suppose I can understand the logic behind the decision to disallow it.
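
The workaround is the INTO form, which is still allowed when the target table has triggers – a sketch, again against a hypothetical dbo.Customers table:

DECLARE @removed TABLE (CustomerID int, CompanyName nvarchar(100));

DELETE FROM dbo.Customers
OUTPUT deleted.CustomerID, deleted.CompanyName
INTO @removed (CustomerID, CompanyName)
WHERE CustomerID = 17;

-- The rows are still captured, just not returned directly to the client:
SELECT CustomerID, CompanyName FROM @removed;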

This was one of the tips in my TechEd Australia presentation, which was re-delivered to user groups in Adelaide and Melbourne. The scripts and slides are available for download. This was one of the more popular tips, based on feedback.

Orr-some time with Geoff

Geoff Orr spoke at the Adelaide SQL Server User Group yesterday, and according to the feedback (and attendance), it was a very popular session. He discussed various options around SSIS configuration, and threw in some jokes as usual.

Of course, he may need to update the talk soon, as it seems the next CTP of SQL 2008 will have many new SSIS features. Keep your eye on connect.microsoft.com/sql for when it comes out. I’ll certainly be checking them out, and making sure I highlight them the next time I run the SSIS courses (and this talk).

SQL and Virtual Earth

A while back I was talking to Bronwen Zande, of Brisbane-based SoulSolutions and GeekGirlBlogs.

Anyway, she and her partner John are big fans of Virtual Earth. As it’s well publicised that SQL Server 2008 will have new spatial types, including fancy ways of integrating with Virtual Earth, I asked her if they were planning a tour of the Australian SQL Server User Groups to demonstrate how this could work. I figure this is something that few SQL Server people will do much investigation into, but may well be asked about by developers who are keen to find out what’s possible.

Lo and behold, I guess things are happening, because they’re on their way! In Adelaide, this meeting will be on January 10th, our usual time-slot on the second Thursday of the month, and there are other events happening around the country too. All this despite the fact that the latest publicly available version of SQL Server 2008 (still the July CTP) doesn’t have the spatial types in it. Hopefully by then something will be available for us to check out the demos. Keep your eye on https://connect.microsoft.com/sql for news on that.

User Group meeting with Christine Bishop

Yesterday we had Christine Bishop come to Adelaide. She’s the Product Marketing Manager for SQL Server and BI for Microsoft Australia. That makes her handy to know if you’re in the SQL space, and we had a nice crowd come along to the user group to meet her.

One thing that made this meeting different to the usual is that Christine isn’t overly technical. She’s not bad, but compared to the people in the crowd listening to her, she would’ve been in the less technical half. Normally we’d have some in-depth technical session, showing people how to do particular things, but this time we looked more at the business side of things.

Feedback was good, and for those people looking for something a little more technical I demonstrated a quick point about SSIS and one about the use of row_number() to make sure that only the three most recent records for each category were stored in a table. I might post about these more another time.
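
The row_number() trick is worth sketching (the table and column names here are hypothetical): rank the rows within each category by recency, then delete everything ranked past three.

-- Keep only the three most recent rows per category:
WITH Ranked AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY Category
                              ORDER BY CreatedDate DESC) AS rn
    FROM dbo.RecentRecords
)
DELETE FROM Ranked
WHERE rn > 3;
-- Deleting through the CTE removes the underlying rows ranked fourth or later.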

My favourite thing about SQL Server 2008

A few people have asked me what my favourite thing is in SQL Server 2008 (“Katmai”). But my favourite thing isn’t actually a feature at all, it’s a mindset that Microsoft are taking with it.

This mindset is “We will only put features into the CTPs once they are basically complete – including the documentation.” And this makes me say “Wow!”.

One of the problems with all beta software is the stuff that’s in there that just isn’t finished yet. You try to use some feature, and it doesn’t work. Or worse, something dies because you tried it. It’s these scenarios that stop people trying out betas, and why we still see people using the previous version nearly two years after release (it’s now over 21 months since SQL 2005 was released).

Not the case with SQL Server 2008 though. Functionality may be limited, but new features should hopefully be both stable and documented when they appear. Because of this, people should be able to port their systems over before release, confident that features won’t be changing significantly between that time and RTM.

This week at my user-group I’m giving a talk on MERGE & TVPs. Two essentially different topics, but ones that complement each other nicely. I think people will leave this meeting picturing places in their code where they’ll want to refactor to take advantage of these new features.
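
As a taste of how they complement each other (a sketch only – the type, table and procedure names are all hypothetical): a client passes a whole set of rows in as a table-valued parameter, and a single MERGE applies the lot.

-- A table type lets a client pass a set of rows to a procedure:
CREATE TYPE dbo.PriceList AS TABLE
(
    ProductID int PRIMARY KEY,
    Price money
);
GO

CREATE PROCEDURE dbo.UpsertPrices
    @prices dbo.PriceList READONLY
AS
BEGIN
    -- MERGE then applies the whole set in one statement:
    MERGE dbo.ProductPrices AS tgt
    USING @prices AS src
    ON tgt.ProductID = src.ProductID
    WHEN MATCHED THEN
        UPDATE SET Price = src.Price
    WHEN NOT MATCHED THEN
        INSERT (ProductID, Price)
        VALUES (src.ProductID, src.Price);
END;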

Thoughts on TechEd

I enjoy TechEd. It’s always good fun. Last week was my fourth TechEd, having attended for the past three years, plus in ’99. The last three years have seen my involvement increase each time. Last year I proctored with the Hands-On Labs, and this year I presented both a session and an Instructor-Led Lab. I’m a big believer in giving back to the community, and I see this as part of it.

I do find that the more I’m involved, the fewer sessions I actually get to attend. But I don’t feel like I’m missing out, because it’s not why I go to TechEd. I go to meet people, to help people with questions, see old friends, and to be a part of the big event. In ’99, I went because my employer sent me. I went to learn things, and I did. In the last few years though, my approach has been quite different, and I think I’ve got more from the event. This year, as a speaker, I was able to mix with a slightly different crowd, but also had an entirely different Ask The Experts experience to previous years.

There’s something about community – the more you give to it, the more you feel a part of it, and the more you want to give. If you’re reading this, you should get involved in your local user-groups (or even think about starting one, like the Hobart-based Datawise guys).