Input, Output and Actions – Back to basics

I was recently going through my RSS feeds and found an old article that I’d apparently earmarked for later reading. That seemed to start a trend and I kept going further and further back – several years back actually.

And that got me thinking….

What is the one trend that we (broad brush term) keep following in this industry? We look forward, constantly: what’s coming, what’s next, where we go from here, where “this” will evolve to. We look at the gadgets, devices, frameworks, tools and services to innovate – because that’s where we’re expected to look – and then we moan when we don’t understand the basics, when that developer we just hired doesn’t get fundamental program construction, or when we make mistakes ourselves.

We are quite often afraid to look at what we left behind – mostly because that would force us to become aware of, and acknowledge, the mistakes we’ve made.

One of the bloggers I quite often enjoy reading (I’m a consumer of his blog, not a participant) is Alberto Gutierrez, and he’s the author of the blog entry I’d bookmarked.

His blog entry “Forget about requirements, Software Development is all about inputs, outputs and actions” was what jump-started my brain and got me thinking about what I’m often doing wrong.

  • Labelling – I’m fanatical about labelling tasks

This helps me organise my work into segments that I can manage – or so I thought. Alberto’s blog entry must have sparked something that made me bookmark it back in 2009. And here it was: my labelling actually isn’t helping me. It’s making things unnecessarily complex. The fact of the matter is that it’s a task – one which has to be done, one which contributes to an overall goal. So, as Alberto puts it: it’s an input.

The result of that input is output – once the task has been completed, of course.

I then set out to make this change to how I was managing things, the idea being to see if it would speed up what I set out to complete. Over a couple of weeks I noticed that the simplification was helping me spend less time organising and more time achieving.

I scrapped all my previous task lists and just made a single list. I use “Remember the Milk” to manage my inputs – use whichever tool you want; the important thing is that it’s available to you when you need it.

Full circle – looking back (or going back to basics) can quite literally be more valuable than looking to the future.

As Confucius said, “Study the past if you would define the future”. It rang true with me and I’ve learned a valuable lesson. Making mistakes isn’t bad – as long as you recognise a mistake, you can avoid making it again. And that goes for all aspects of life really – not just the professional side, but the personal too.



Upcoming event – Top 10 Features developers love with Jeremy Likness


In less than 24hrs we’ll have our first session focused on Windows 8 and we probably couldn’t be in better hands than with Jeremy Likness.

Jeremy has presented for LIDNUG before and it was one of those sessions where you seriously didn’t want to leave your desk for any reason, just so you didn’t miss anything. Tomorrow’s event promises to be even better!

What I think is cool is that Jeremy will be doing all the demos from a slate, running Windows 8 of course. We did a dry run today and it worked a treat. Pretty gutsy of Jeremy though…

So, what are you waiting for? Register here:
Wednesday, June 20, 2012 from 1:00 PM to 2:30 PM (ET).

As if the session isn’t enough, here’s another piece of information. Wintellect is giving away a $499 virtual training course.

Partner offer from Wintellect: Attend and you will be entered to win a Wintellect virtual training course ( $499 value). The winner will be announced during the webinar. *You must be present to win.

Partner offer from Syncfusion: Get more than 600 Metro-style icons for FREE! Download Syncfusion Metro Studio – a collection of Metro-style icon templates that can be easily customized using an intuitive customization tool to create thousands of unique icons. Download now!

Very much looking forward to this event, as I attended a full day of geeky goodness here in Australia this weekend, on exactly this topic – well, Windows 8 development anyways.

LIDNUG: The Art of Debugging with Mario Hewardt

We had a great session last night (I couldn’t attend due to the storm that hit us – Cat 3 apparently; we all blame my fellow colleague MA for bringing the British weather to Perth).

Anyways, enough of blaming poor MA, what we have here is a brilliant session that’s probably going to surprise some people.


All of our events are recorded and uploaded to our YouTube channel

Check out the upcoming list of events from LIDNUG – bound to be something to interest you.

Stick to your guns – why lifecycle management is important in the enterprise

The concept of Application Lifecycle Management is not a new invention that just popped up last year and hit us in our backsides. It’s a very wide topic spanning a lot of categories, such as:

  • Project Management
  • Change Management
  • Release Management
  • Design, Modelling and Issue Management

Yes, that’s right guys and girls (honestly!!) – Project Management is indeed part of the Application Lifecycle (for real this time!!).

This is spread over an even larger list of methodologies and tools, some fluid/agile and some integrated…

We often hear that IT projects are more likely to run over budget, be delivered late and end up looking like a bug-ridden open source project… statistically that’s quite often the case – so why do we bother when we’re doomed to failure? Well, because when things are done right, it just works.

One of the biggest mistakes I frequently notice on IT projects is the failure to understand that it’s not just the development and project management teams that have to adhere to a process. (Yes, I know, those that pay the bills just want things done now… you arguing?… s’ok, there’s the door). IT projects are like a house of cards: the more you add and the more complex the “build” becomes, the more careful you have to be – and that means following the process even if it means you have to get approval before proceeding.

I was once assigned to a project to resolve a few bugs here and there. Nothing major – this was the final stage of the project, just a few helpful hours when the cycles were free. The first issue I was assigned wasn’t that complex, but getting to grips with the code base and project layout did take a bit of time. It wasn’t really documented and there was nobody around to give a hand. Anyways, after about 1 1/2 hours of trying to reproduce the alleged bug, I finally gave up and tracked down the BA who’d lodged the bug in the first place. For the first 30 seconds the BA had to get their head around which particular bug I was referring to – all understandable, the project had run for years.

Then it dawned on the BA which issue I was referring to… the message was…

That issue was fixed 1-2 days before it was assigned to you…

Ok, naturally I wasn’t entirely impressed, so I chased down the PM who’d assigned the issue to me and got the usual blabber about “being newly assigned onto the project” and “we’re all in this together, please stop strangling me… garggle… ugghh… gasp”.

So the incident at least provided a release for my homicidal side…

Why did the Project Manager assign the issue to me in the first place? Well, he’d been running around, trying to get resources stitched together to complete the project and get it out of his hair. Yes, he was a newly assigned PM and the developers on board were largely juniors. Obviously a failure to communicate on one level or another….or….something a bit more sinister?

The steps pretty much showed a full breakdown of the lifecycle.

Test conducted -> Issue Found -> Issue Raised -> Issue Triaged -> Issue Fixed….

And then that’s where things really broke down. No detail was found in TFS that the issue had been fixed. It was still assigned to the PM. Still in active status.
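The breakdown described above is essentially a state-transition problem. A hedged way to picture it is as a tiny state machine: a work item can only move between adjacent states, so an issue that is “fixed” without its status being updated surfaces as an explicit error instead of silently going stale. The states and transitions below are purely illustrative – this is not TFS’s actual work-item model.

```python
# Minimal sketch of an issue lifecycle as a state machine.
# States and transitions are illustrative, not TFS's real workflow.

ALLOWED = {
    "Raised":  {"Triaged"},
    "Triaged": {"Active"},
    "Active":  {"Fixed"},
    "Fixed":   {"Closed"},
}

class Issue:
    def __init__(self, issue_id: str):
        self.issue_id = issue_id
        self.state = "Raised"

    def move_to(self, new_state: str) -> None:
        # Reject any jump that skips a lifecycle step.
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(
                f"{self.issue_id}: illegal transition "
                f"{self.state} -> {new_state}")
        self.state = new_state

issue = Issue("BUG-42")
issue.move_to("Triaged")
issue.move_to("Active")
issue.move_to("Fixed")

# Skipping steps - the breakdown described above - becomes a loud failure:
try:
    Issue("BUG-43").move_to("Fixed")
except ValueError as e:
    print(e)
```

The point of modelling it this way is that the process failure (a fix never recorded, a status never updated) can’t hide: the workflow simply refuses to proceed until the paperwork matches reality.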

Obviously the PM was never informed that the issue had been fixed… who’s to blame for that? The developer? The tester? The PM?

It’s a very basic example of what can go wrong when the process isn’t followed by everybody involved. That includes developers, testers, project managers, stakeholders etc.

Considering how simple this was, the cost was huge and the ROI so small as to be negligible. From a project perspective it’s a drop in the ocean; however, if it happened often enough the project would fail (unless we had some seriously generous clients and budgets).

Look at this from an enterprise perspective – factor in the cost of delays, additional licensing and hardware, consulting resources et al – and it’s obvious that the lifecycle is paramount. Had the process been followed (one was in place, it obviously just wasn’t adhered to), the waste of time (and additional cost to the project) could have been avoided.

Shortly after that incident I received an email asking me to make a small change (just a date calculation, v.simple and quick fix) to another feature. I naturally assumed the work item had been assigned to me, so I could see all the details of the change (and ensure nothing had been missed in the email) – but alas, it hadn’t. Due to timezone differences I needed to wait for the PM to get back on board for the day, to ensure the details I’d been given in the email were all-encompassing. The change had been raised as an issue to be resolved – and lo and behold – it also hadn’t been assigned to me. Great: due to check-in policies in TFS, I wouldn’t be able to check in the change for the test team. Again, an added piece of delay that ends up costing money in the long run. Of course, I could go and make the change to the code base, wait for the issue to be assigned to me, check in and then move on. But here’s another idea.

How about I DON’T do the work before the “paperwork” is all sorted?

Yes, I waited patiently for the PM to respond to my email – “S’cuse me, sir..please assign issue xx to me and verify all details…kthxbai” – and then I proceeded.

Did it take a bit of extra time to get this done? Of course it did, but what could the consequences have been had I not gone down that path? Again, wasted time, and possibly even impacting code and changing features which weren’t meant to be changed. Again, more wasted money/time/resources.

This is just a small example of why carefully following a set lifecycle is paramount on IT projects. In the whole scheme of things this wasn’t anything major – however, the project had already rolled way over budget and deadline, so why continue to waste resources when a simple process could have prevented it all?

Measure twice, cut once – The law of professional development

Over the years, I’ve been fortunate to work with some seriously talented people – people who’ve put my own meagre skills to the test on a daily basis. It’s been a mix of people – some with strong academic backgrounds (BSc, Masters, PhD, et al) and some with a born knack for development. It has at times been a humbling experience – and in many ways continues to be – one that’s taught me there’s always somebody out there much better, faster, smarter and stronger than me.

I’ve also been lucky to have people to lean on for support (not just technical) and guidance, and some of the ways I try to work are reflected in that. I’ll be the first to admit that I don’t do everything absolutely right. I’m human and therefore expected to fail. We’re genetically wired to fail, it’s as simple as that.

One of my long-time mentors once gave me a task to complete (what the task itself was is irrelevant), with no timeline, except I knew that this particular task would only benefit a minute fraction of people. I started working to complete the task, quickly made headway and was nearly finished when I was stopped to answer some questions. In this case the question was “Why did you choose to do X, when Y was naturally a much better approach?”. Now, method “X” was about 5 times faster to complete than method “Y” – it wasn’t a really kosher way of doing it, but usage was less than a handful of people, so I “cut the corner”, so to speak, and took the risk of failure on my own shoulders. Simple maths really.

Risk = Probability * Damage Potential

So I took the chance based on that and figured it was only going to be used by a couple of people – no reason to go all out and design the Eiffel Tower all over again. Naturally, as with many things that happen in life, I didn’t quite see the point of the exercise, except I knew that there certainly was one. After I answered the question about my choice of method (“X” over “Y”), I completed the rest of the task.
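As a quick illustration of how that formula plays out, here’s a tiny worked example – all the probability and damage figures below are made up for the sake of the arithmetic:

```python
# Worked example of the simple risk model above.
# Probabilities and damage figures are invented for illustration.

def risk(probability: float, damage_potential: float) -> float:
    """Risk = Probability * Damage Potential."""
    return probability * damage_potential

# The same shortcut, same chance of failure, two very different contexts:
quick_fix = risk(probability=0.25, damage_potential=2)    # handful of users -> 0.5
critical  = risk(probability=0.25, damage_potential=100)  # business critical -> 25.0

print(quick_fix, critical)
```

With a tiny damage potential, even a fairly likely failure yields negligible risk – which is exactly the calculation behind “cutting the corner” here. The same shortcut in a business-critical system scores fifty times higher.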

A couple of things occurred to me as I was looking at the final output. Well, it did the trick… buuuuut… I wasn’t necessarily proud of the way it’d been done. At first I figured the doubt came from not being 100% sure what the point of the questions and exercise was, but then it dawned on me. I had used a shoddy approach – yes, it did indeed do the trick and got the job done, but there was very little satisfaction in completing the task. Personally I wasn’t entirely happy with it. It took me a few days to get it all sorted out… the “why” factor had hit me.

If it is worth doing, it’s worth doing right

It didn’t really matter how many people would be impacted – what mattered was the way I’d chosen to go about it. From a professional perspective, what did it say about me? I was willing to cut corners when nothing prevented me from doing it the right way. I had no time, budget or complexity constraints. It was a lazy attitude, and this was exactly the point of the exercise.

Back to the present day – I came across a question about why bother using proper techniques in a development shop, seeing as it was only a small 1-2 man shop. Was using MVC, UML, OOD etc. overkill for an internal application? Very little risk was associated with failure of the internal application: the impact was tiny, and each of those impacted could easily fix any issues that popped up. It was a supportive application and obviously not business critical (otherwise the question would most likely not have been asked). Many answers were provided – such as what OOD is and why it’s technically implemented, how it helps manage complexity, etc.

I read through most of the answers, but noticed nobody was looking at the question from another point of view – from the perspective of “What does it say about me, professionally?”. It occurred to me that my lesson learned way back when was applicable to many other areas, and it’s the foundation of how I do things.

Do it right, not because it’s an option not to, but because what you do, and how you do it, is what shapes you as a professional

Does cutting corners, when there’s no reason not to do it right, not signal that something is wrong? Personally I believe so.


An April Fool’s Day MVP – MVP 2012

The 1st of April has been hilarious for me ever since I got my first Microsoft MVP Award in 2006. Not solely because I’ve been honoured by Microsoft with the MVP Award, but also because quite a few of my friends find it funny to send me all types of “spoofed” emails on the day.

This year was of course no different – lots of emails (of varying dubious quality) – but at the end of the day I did indeed receive one particular email that was bound to bring a smile to my face.

Yeps, I’ve been awarded the 2012 Microsoft MVP Award and I’m very pleased with it. The award is a recognition from Microsoft, and irrespective of why we do what we do for the community, it is indeed very nice to get the recognition. When I look at the company and quality of the MVPs out there, I feel truly humbled. Some of the industry’s biggest names are MVPs, but it’s like comparing apples and flux capacitors.

So here’s to another great year of community involvement – but first a big thank you to all of those who’ve bothered to listen to me waffle on for hours on end. Let’s make 2012 even better, and bigger, than 2011.

 Anyways, you can find me floating around in a few places:

 Feel free to ping me if you got any questions..

oh, btw, the category is ASP.Net 🙂

A choice – classroom vs. virtual technical training

Over the last couple of weeks I’ve been working on a business case for our centre to adopt a virtual technical subscription, as opposed to our classical choice of “classroom” technical training.

It seems more and more clear that the ROI of classroom training is exceedingly low if you look at it strategically.

What I’ve deduced so far is:

  • Classroom training provides a “feel-good” positive feedback from staff attending it. They know it’s costly and feel more valued by the company.
  • Classroom training provides a hands-on experience in many cases
  • Classroom training does not provide a large variety of topics (usually one topic per training session/week at a high cost)
  • Classroom training does not provide an on-demand availability as schedules are set by training providers
  • Classroom training does not provide a high knowledge retention rate

These were some of the main points that form the grounds for my business case.

Turning that around and looking at a virtual technical subscription instead, there are subtle areas of difference.

  • Virtual training doesn’t necessarily provide as much “feel-good” positive feedback from staff utilising it. The individual value associated with it is generally less than classroom training: it’s a perceived monetary value.
  • Virtual training provides a hands-on experience in many cases if you mix your technical subscription offerings
  • Virtual training provides an on-demand availability service. It’s there when and wherever you need it
  • Virtual training provides a large variety of topics and can be either specific or general in depth
  • Virtual training provides a high knowledge retention rate as you would consume content which is needed here and now

From the generalisation (which I’ve had to make), it’s clear that both cost and knowledge are of high importance, leaving only a business decision on 1 or 2 points to be made.

I was able to run several trials with different staff over a short period of time, and found that on average 6hrs was spent weekly (some lower and some much higher) when the on-demand content was available. The most important feedback I got was a sense of “I can train when and where I feel like it” and “I can re-visit content I’m not sure about anytime”. These two points were of immense value to me, as they clearly indicated that the benefits of classroom training were slowly being devalued.

The next phase for me is now to look at delivery, control and management/availability of sources and then work out a cost vs. cost for each offering.

It’s interesting for me personally to see how diverse many of these virtual training subscriptions are in their topics, and it gives me some more positives to work with.

Getting close and personal – pitfalls of long term consulting

I’ve been consulting for quite a number of years now – mostly in and around service delivery – so I’ve been exposed to a number of different types of projects. Needless to say, both good and bad; however, isn’t that why consultants are called in? If things were always easy and straightforward, would there really be a need for a consultant? Probably not, so you’ve got to take the good with the bad… and vice versa.

Most projects are of a short-term nature, meaning you’re called in to sort out a small, specific piece of the puzzle, and you rarely stay long enough to see a project reach completion.

Consultants come in many shapes and sizes, but the most common traits are:

  • Subject matter experts of various platforms, technologies or business related topics
  • Skills resource to supplement an existing pool of resources

The main benefit you get from taking up consulting is the breadth of exposure to a large number of industries, challenging scenarios and people. It’s a great way to build a huge repertoire of experience in relatively short periods of time. The second benefit I see is that you don’t get terribly personally involved with the projects you’re working on – generally there really isn’t time to get attached.

But, once in a while you’ll end up either going back to the same project again and again, or you could stay with a project for years on end.

Here’s where the line between consultant and project member tends to blur. As most who’ve dealt with developers would know, they’re a passionate bunch (and at times can have huge egos). A developer can become very attached to a project, and by extension to the work they contribute. There’s a certain pride amongst developers in what they do (both good and bad), and I guess this pride is what often makes them excel in their chosen line of work. Basically, if you don’t care, your work often reflects it.

As a consultant you really cannot afford to become too passionate about things – and here’s where long-term consulting can become rather bad, whether you’re part of service delivery or planning. But alas, consultants are merely human, and getting involved on a personal level will happen once you’ve spent years on a project – it’s inevitable really. The difference between the short- and long-term gigs is that you start to see yourself (read: the consultant) as a project member. You spend more time on the project, and on-site, than you do at the company you work for.

Your client’s office is where pictures of your kids appear, where that favourite coffee mug has a permanent home in the cupboard, etc.

Well, is this a bad thing? You’re a gainfully employed member of the industry – secure enough that you dare take out that car loan, settle on that mortgage, raise a family. All things that generally require stability.

I guess it depends greatly on where you’re sitting, and the pros and cons have to be weighed up carefully.

Rather than going into the personal benefits of long-term consulting, there are a few points that raise danger signs everywhere…

  • A blurred line between employee and employer (note, not consultant and client)
  • A deep personal involvement by the consultant

These two points are very hard to compensate for. By being away from your “colleagues” and place of employment for long periods of time, a natural disconnect occurs between employee and employer. Questions such as “Who do I work for?” will pop up more and more frequently. So a certain amount of investment has to be put into maintaining that cord of loyalty and the feeling of belonging to a company. If that isn’t done, chances are that once the project is complete (or the contract runs out, whichever occurs first), the consultant will be amenable to offers from possible competitors.

The second point, which I touched on briefly above, is just as dangerous – but more from a personal development perspective. If the consultant wishes to continue being a consultant, this must simply never occur, as it’s detrimental to maintaining a professional exterior. As a consultant you cannot have too deep a personal investment in the projects you’re contributing to, or frustrations will bubble up and take over – again, making you amenable to seeking other pastures.

So the cold truth of long-term consulting is that there’s a very high probability of the consultant looking elsewhere for employment.

Supercomputing – What would you do with the world’s most powerful computer?

Fujitsu Limited has created the world’s most powerful computer – it reached a staggering 10.51 petaflops (that’s more than 10 quadrillion calculations per second, btw) using 705,024 SPARC64 processing cores. That’s 4 times faster than the second most powerful supercomputer in existence today. Since I started working with computers (and watching SciFi movies) processing power has increased dramatically, and we now have mobile devices with more processing grunt than most servers held back then.
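The headline figure is easier to grasp broken down per core. A quick back-of-the-envelope check, using only the numbers quoted above, puts each SPARC64 core at roughly 15 gigaflops:

```python
# Sanity check on the headline numbers quoted above.

petaflops = 10.51
flops = petaflops * 10**15    # total calculations per second
cores = 705_024

per_core = flops / cores      # throughput per SPARC64 core, ~1.49e10 flops
print(f"{flops:.3e} calc/s total, ~{per_core:.2e} calc/s per core")
```

That per-core figure is what makes the aggregate so staggering: it takes nearly three quarters of a million of those cores, working in concert, to reach the total.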

Needless to say, some of the biggest discoveries in science today have happened purely because of the processing capability available. So consider this: what would you do with the most powerful computer in the world? Cure cancer, locate the “god particle”, map the universe?

It’s staggering to think of this accomplishment, let alone if you put it into perspective and compare it to what we, even today, expect from computers.

More info:


Fujitsu HPC LinkedIn group
Fujitsu Facebook page
Fujitsu Google+ page
Fujitsu HPC YouTube

Visual Studio 11 [Beta] – first impressions…

There’s been much hype… yes, much hype about the next instalment of Visual Studio. Every man and his dog has been complaining about the colours, but from a development-IDE perspective that’s probably not the most important aspect. I would much rather see a 64-bit version available, but that’s another story altogether.

First off, install experience…

The download was 1.7GB for the Ultimate edition and took me about 30-35mins on my home ADSL2+ (via Bigpond), so that alone was a huge improvement over the last Visual Studio release. Back then, the download took me more than 24hrs (probably due to the lack of bandwidth MSFT had available).

The install was extremely easy – yes, this is a beta and things may change – however, since I downloaded the Ultimate edition it should pretty much have everything enabled…

Download the .iso and right-click (Windows 8 Consumer Preview FINALLY has a native ISO mount capability) -> Mount.

Or you could alternatively use the ribbon above after you’ve clicked on the .iso file.

Then it’s just a matter of starting the installer..


It becomes pretty obvious that there’s a new design in town – gone is the sluggish and drab “Windows Classic” look that all previous installers have had. That the colours just so happen to be my favourite colours does help too 🙂 And it fits my desktop background and theme perfectly 🙂


One of the things that I like is attention to detail – the little “pellet animation” that I’ve personally come to associate with the Metro design (it’s obvious all over Windows Phone 7 and Windows 8) is a nice touch in the installer. That aside, the installation went smoothly and without a hitch (hang on, this is a beta release..why didn’t I have to try to install at least twice and add some obscure KB/Patch?? that’s not right!).

Just another quick detail – I love the finishing touch of the name of the final process -> “ultimate_finalizer”… anyways, as can be seen below, it was a success (if there was any doubt, that is).

Once the default development environment has been chosen (C# Development Environment, of course), the first thing that strikes you is the UI. At first the difference was a bit distracting – probably because I wanted to soak in everything at once. I am a geek after all. But then familiarity settled in again. Much of what I’m used to seeing was there (for the new features of VS11 I suggest you read through this MSDN article).

The look and feel is much less distracting than it used to be. I’m not a UI expert, but the look really does ring a bell with me, very pleasantly. Content is master. A smooth grey theme is seen across the whole GUI, with very few distractions such as bright colours taking focus away from what should be on every developer’s mind – productivity.

As you can see, the grey tones and the blue highlights are very soft. The contrast between sections, colours and context is really effective (again, does it follow a tried and tested/bested design paradigm? do I care? no). I spend a huge amount of time in front of a monitor each day, and I really hate having something stress my eyes and put a strain on my concentration.

Last night I spent a good 3 hours writing some code and getting used to Visual Studio 11. And even though this was in my preferred environment (i.e. at home, comfortably in my own office), I didn’t come away with any kinks and was just about as productive as normal.

Enough of the colours (or lack thereof) though..

I noticed that there’s quite a few extensions shipped OOTB with this beta – they may not be shipped when it goes into production, but in all honesty I don’t see why the teams should bother removing any of them.

  • ASP.Net MVC 3 template packages
  • ASP.Net MVC 4 template packages
  • ASP.Net Web Pages 2 template packages
  • ASP.Net Web Pages template packages
  • ASP.Net WebForms template packages
  • Concurrency Visualizer
  • IntelliTrace
  • NuGet Package Manager
  • PreEmptive Analytics Aggregator Visualizer
  • Visual Studio PerfWatson
  • VsGraphicsDebuggerPkg
  • Web Tooling Extensions

I’m particularly pleased to see the NuGet Package Manager shipped OOTB.

Secondly – another aspect that I’ve never really understood not being included straight away – Team Foundation Server support is now also OOTB. Looks like it’ll be there from the get go as well. And it works just fine with TFS 2010.

I tried to connect to LIDNUG’s instance and it worked a treat. One other thing, which I admittedly hadn’t paid attention to in VS2010, was this little tidbit.

Yeps, in the lower right corner of the “Start Page” there’s a set of options available. The one which surprised me was “Close page after project load”, which is checked by default. No need to clutter your screen with the Start Page once a project is loaded. Excellent.

One of the applications I’ve been using quite frequently of late is Visual Studio LightSwitch, which also seems to be included in the Ultimate version of Visual Studio 11.

Now, getting started in Visual Studio has frequently been a chore – especially the “New Project” dialogue – so I was keen to see how fast it loaded. It was lightning fast, to be honest. I half expected it to take yonks to load, but nopes, it came up straight away.

Anyways, it’s time to crunch some code – will blog another one once I’ve played around with it some more. All in all, a definite improvement.