Deborah's Developer MindScape






Tips and Techniques for Web and .NET developers.

Archive for VB.NET

June 4, 2014

"I Don’t Have Time for Unit Testing!"

Filed under: C#,Testing,VB.NET @ 12:50 pm

So, be honest, when you hear someone talk about unit testing … what is your first thought?

  • Is it: “I just don’t have the time to do unit testing”!
  • Or “Our management will never approve the time for unit testing”?
  • Or something similar?

Let’s look at what unit testing can do for you:

Save you Time

Yes, that’s right!

Throughout my career as a software developer, I have seen how much time unit tests can save.

Let’s think about this …

  • Any developer that has written more than a few lines of code knows that it needs to be run to verify that it operates as expected. Right?
  • You need to execute the feature to confirm that it takes appropriate inputs and produces appropriate outputs. You may need to go through this process several times.
  • And as you build more features, manually testing all of them takes more and more effort.
  • For example, say you write a pedometer application that takes in a step count goal and an actual step count and then calculates the percent of goal you have reached.
    • To test the pedometer feature you need to execute the application, navigate to the appropriate feature, enter all of the required data, and then validate the results.
    • And if you find a bug you may have to repeat this process again, and again, and again.
    • For the feature you are working on now, how many times have you run the application to try it out? 10? 20? more?
  • The idea of an automated code test is that you can write code to perform that testing.
  • So if you want to test that the pedometer calculation is correct, you write an automated code test that defines appropriate inputs, calls the function under test, and verifies the results.
  • Then when the users want a change to that code (and you know that they will), you can make the change and just re-run the tests.
  • No need to slog through the UI again for each possible set of inputs.

This can save you LOTS of time.
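
To make this concrete, here is a minimal sketch of such a test using MSTest. The Pedometer class and its CalculatePercentOfGoal method are illustrative names invented for this example, not part of an actual application:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Illustrative class under test: calculates the percent of the step goal reached.
public class Pedometer
{
    public decimal CalculatePercentOfGoal(int stepGoal, int actualSteps)
    {
        if (stepGoal <= 0)
            throw new ArgumentOutOfRangeException("stepGoal");
        return (decimal)actualSteps / stepGoal * 100;
    }
}

[TestClass]
public class PedometerTest
{
    [TestMethod]
    public void CalculatePercentOfGoal_ReturnsExpectedPercent()
    {
        // Arrange: define appropriate inputs.
        var pedometer = new Pedometer();

        // Act: call the function under test.
        var percent = pedometer.CalculatePercentOfGoal(stepGoal: 10000, actualSteps: 7500);

        // Assert: verify the result.
        Assert.AreEqual(75m, percent);
    }
}

When the calculation changes later, you simply rerun this test (and any others like it) instead of walking through the UI again.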

Help you Find Bugs Faster

You just received a bug report. Something in the code does not work.

Instead of trying to reproduce the problem by wading through a bunch of UI, you can instead use your unit tests to quickly reproduce, find, and fix the error.

Allow you to Refactor Safely

So you are working on some code that is just too painful to deal with.

You’d like to apply some refactoring techniques to make the code cleaner and easier to use.

With no tests, you are running a risk of introducing bugs if you rework the code.

If you have unit tests in place, you can safely perform a refactoring because when you are done with the refactoring, you can rerun the tests to confirm that all is well.

Readily Add Features

You can add a new feature and rerun all of the existing tests to ensure that it does not break any existing functionality.

No more worrying that adding feature B will adversely affect feature A.

Minimize Those Annoying Interruptions

So you are in the middle of coding the next feature and you have to drop everything because someone entered something bad somewhere in your application and now it is crashing.

Bummer!

Having a good set of unit tests can minimize those annoying interruptions.

Enhance your Value

Don’t you just hate it when the “QA” person emails you to let you know your code doesn’t work?

This is especially difficult if that “QA” person is your boss or your client.

Having a good set of unit tests can make you look like a coding master!

Or what if the developer who comes after you changes something, and now it looks like your code doesn’t work?

Unit tests help the developers that come after you to better understand and modify your code. They can re-run the tests to ensure the code still works after their changes.

Having a good set of unit tests verifies that your code works over the lifetime of the application.

Conclusion

Writing unit tests isn’t hard or time consuming once you get into the habit. And these benefits look pretty good!

For a gentle introduction to automated code testing, see my Pluralsight course: “Defensive Coding in C#”.

Enjoy!

PS: Check out my Pluralsight courses!

Code Quality and Automated Code Testing

Filed under: C#,Testing,VB.NET @ 12:31 pm

I’ve heard it said that the top three techniques for improving code quality are:

  • Unit testing
  • Unit testing
  • Unit testing

There is no better defense for the quality of your code than a set of automated code tests.

Automated code testing involves exercising code and testing its behavior by writing more code. So you have a set of code that tests your original code.

The goal of unit testing is to isolate each unit of code in an application and verify that the unit of code behaves as expected in both valid and invalid conditions.

To achieve this goal, we can:

  • Refactor our code where necessary into individual units (methods) that can be tested.
  • Create a set of tests for each method.
    • Tests with valid inputs.
    • Tests with invalid inputs.
    • Tests that could produce exceptions.
  • Execute those tests using a testing framework, such as MSTest or NUnit, both of which are executable from within Visual Studio (ALL editions, including the free Express edition!).
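
As a minimal sketch of that structure (the Discount class and its Apply method here are invented for illustration), a set of MSTest tests covering valid inputs, invalid inputs, and exceptions might look like this:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Illustrative unit under test: a small, single-purpose method.
public class Discount
{
    public decimal Apply(decimal price, decimal percent)
    {
        if (percent < 0 || percent > 100)
            throw new ArgumentOutOfRangeException("percent");
        return price - (price * percent / 100);
    }
}

[TestClass]
public class DiscountTest
{
    [TestMethod]
    public void Apply_ValidInput_ReturnsDiscountedPrice()
    {
        var target = new Discount();
        Assert.AreEqual(90m, target.Apply(100m, 10m));
    }

    [TestMethod]
    public void Apply_ZeroPercent_ReturnsOriginalPrice()
    {
        var target = new Discount();
        Assert.AreEqual(100m, target.Apply(100m, 0m));
    }

    [TestMethod]
    [ExpectedException(typeof(ArgumentOutOfRangeException))]
    public void Apply_InvalidPercent_ThrowsException()
    {
        var target = new Discount();
        target.Apply(100m, 150m);
    }
}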

Don’t have time to test? See this post!

For a gentle introduction to automated code testing, see my Pluralsight course: “Defensive Coding in C#”.

This is what one reviewer said about the “Automated Code Testing” module of this course:

This module is an excellent introduction to unit testing with C#!

In fact, it should be recommended to C# subscribers as the first place to go to learn about unit testing, before they take any of the .NET unit testing courses in the library. For many, this is all they will need.

It takes a viewer on a clear path from zero knowledge about unit testing to being able to do useful, real-development unit testing in 45 minutes.

It does a very good job of covering both the mechanics and how to make practical use of unit testing.

Enjoy!

PS: Check out my Pluralsight courses!

May 16, 2014

What is Defensive Coding?

Filed under: C#,Testing,VB.NET @ 10:40 am

From Wikipedia (as of 4/14/14):

… an approach to improve software and source code, in terms of:

• General quality – Reducing the number of software bugs and problems.

• Making the source code comprehensible – the source code should be readable and understandable so it is approved in a code audit.

• Making the software behave in a predictable manner despite unexpected inputs or user actions.

Let’s consider each of these bullet points…

General quality

Coding defensively means to actively code to reduce bugs. One of the key techniques for improving quality is through automated code testing.

Not sure you have time in your project schedule for automated code testing? That’s a topic for another blog post. Or check out my “Defensive Coding” course referenced at the bottom of this post for a demonstration of some simple automated code testing techniques and a discussion of the “no time for testing” issue.

Comprehensible

It is not just computers that need to read and understand your code … people need to read and understand it as well.

If another developer doesn’t understand your intent, they may make incorrect assumptions about that code and make inappropriate code changes … causing your code to fail.

Plus if the code is easy to read and understand, it will be easier and less time consuming to modify as the application is maintained or enhanced over time.

The key to making source code more readable and understandable is to write “Clean Code”. The concept of “Clean Code” was first presented by Robert Martin in his book “Clean Code: A Handbook of Agile Software Craftsmanship”.

The cleaner your code is, the easier it is to understand, maintain, and test.

Predictable

Predictable code should handle unexpected inputs or user actions by anticipating them and responding accordingly.

This includes techniques such as guard clauses, validation, and error handlers.
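
As a minimal sketch of a guard clause (the class, method, and parameter names here are illustrative), the code checks its inputs before doing any work:

using System;

public class StepLogger
{
    public void RegisterSteps(string userName, int stepCount)
    {
        // Guard clauses: anticipate unexpected inputs and respond predictably.
        if (string.IsNullOrWhiteSpace(userName))
            throw new ArgumentException("A user name is required.", "userName");
        if (stepCount < 0)
            throw new ArgumentOutOfRangeException("stepCount", "The step count cannot be negative.");

        // Normal processing continues here, confident that the inputs are valid.
    }
}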

Putting these three concepts into a picture summarizes the goals of defensive coding:

[Image: the three goals of defensive coding – general quality, comprehensible, predictable]

For more information on Defensive Coding, see my Pluralsight course: “Defensive Coding in C#”.

Enjoy!

PS: Check out my Pluralsight courses!

May 7, 2014

Does Your Code Feel Like a Ball and Chain?

Filed under: C#,VB.NET,Visual Studio @ 11:06 am

Do you ever work on code that is disorganized; that was too often written the “quick” way instead of the “right” way? You try to make one little change or addition and the existing code makes everything difficult. The code feels like a ball and chain, dragging you down and slowing your progress.


If you have experienced this phenomenon, you have seen the results of technical debt.

Pluralsight just released a new course from Mark Heath entitled “Understanding and Eliminating Technical Debt“. It explores what technical debt is, the problems it causes, and how you can identify and quantify it. Then, the important bit, it shows you how to create an action plan to address the technical debt and provides some practical techniques for repaying it.


I’ve just watched this course and HIGHLY recommend it for any developer.

Enjoy!

PS: Check out my Pluralsight courses!

February 18, 2014

I’m Looking for a .NET Project

Filed under: C#,VB.NET,Visual Studio @ 2:16 pm

I recently completed a .NET project for a client I had been working with since November of 2009 … over 4 years.

It was great fun to help them build their system: a client Point of Sale application, a Silverlight (MVVM/XAML) management package, a Web API/jQuery customer-facing application, a WinForms support tool, over 2,100 automated code tests, and all of the parts in between. I will greatly miss this code (and my wonderful client, of course)!

I am looking for my next .NET project, large or small. I am great at turning nebulous requirements into a successful application. I have experience working in a team or as the sole developer; training and mentoring developers; refactoring existing or building new applications; and using an iterative development approach for a quicker time to market. And I’ve recently been working with JavaScript frameworks such as AngularJS, Backbone and Knockout.

I am located in the Silicon Valley/San Francisco Bay area, but can telecommute to anywhere. (My last several clients required telecommuting, including one in northern California and one in eastern Canada.)

Pass along this link if you know of someone who could use an experienced .NET developer / consultant on a full-time, part-time, or one-time basis.

Thanks!

Email to deborahk at insteptech dotcom

February 17, 2014

Technical Debt of Using Code Behind

Filed under: C#,VB.NET,Visual Studio @ 6:25 pm

I recently wrote a post entitled "Why Use Code Behind?". It outlined several reasons why .NET developers use code behind for the logic of their C# or VB.NET applications, not just for UI management.

This post looks at the down side of using code behind for application logic … in two words: Technical Debt.

From Wikipedia:

"The debt can be thought of as work that needs to be done before a particular job can be considered complete. If the debt is not repaid, then it will keep on accumulating interest, making it hard to implement changes later on. Unaddressed technical debt increases software entropy."

From Ward Cunningham, 1992 (as quoted in Wikipedia):

"Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt."

Bottom line … using code behind for the logic of an application adds to your technical debt.

In some situations, such as when building a Minimum Viable Product, incurring technical debt is a logical choice. Why not incur debt if the team is not even sure they are building the right thing?

In other situations, the team may not be aware of the technical debt they are accumulating, at least not until the application complexity, testing effort, or bug count becomes unacceptable, or worse, the project fails.

Two areas where technical debt becomes apparent are maintenance and testing.

Maintenance

In most cases, writing the initial code is only a fraction of the lifetime of that code. The vast majority of time is spent extending, enhancing, modifying, or fixing it.

Writing application logic in the code behind makes maintenance much more difficult (adding interest on the debt). Code behind:

  • Prevents Reuse. For example, say a developer has code in the code behind that opens a file. If a later feature needs that same logic, the developer has to copy and paste it.
  • Encourages Duplicate Code. See the prior example. And if a bug is later found in that routine, will the bug in all of the copies be found and fixed?
  • Adds Complexity. Instead of working with many small routines that each perform one task, code behind often leads to writing very long methods that contain all of the logic required for a particular event. For example, a Save button’s code behind might perform validation, create a transaction, save to the database, generate an email, and print a receipt.
  • Prevents Automated Code Testing. See the next section.

By writing logic in components, those components can be reused, preventing duplication. And the methods within the components can be single-purpose, making them less complex and easier to maintain and test.
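
Here is a minimal sketch of that idea (the Order, OrderService, and form members are illustrative, not from a real application). The code behind stays thin and delegates the work to a component that can be reused and tested:

using System;
using System.Windows.Forms;

// Illustrative domain class.
public class Order
{
    public decimal Total { get; set; }
}

// The application logic lives in a reusable, testable component.
public class OrderService
{
    public bool SaveOrder(Order order)
    {
        if (order == null)
            throw new ArgumentNullException("order");

        // Validation, transaction, database save, email, and receipt printing
        // would each be their own small, single-purpose methods (omitted here).
        return true;
    }
}

public class OrderForm : Form
{
    private Order currentOrder = new Order { Total = 100m };

    // The code behind only manages the UI and delegates the logic to the component.
    private void SaveButton_Click(object sender, EventArgs e)
    {
        var service = new OrderService();
        bool success = service.SaveOrder(currentOrder);
        MessageBox.Show(success ? "Order saved." : "Save failed.");
    }
}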

Automated Code Testing

Writing code without unit tests adds to the interest on the technical debt … and not like a home mortgage’s 3 or 4% interest … more like a credit card’s 35% or more. Let’s look at why…

A developer builds a feature and provides it to someone else for testing. That tester may spend many hours testing all of the possible paths through the code. The code is released and work starts on the next set of tasks.

When that work is finished, the tester needs to test every single path again:  all of the original code paths plus all of the new ones.

In most cases, the developer ran over the schedule, so the testing schedule is cut. The tester needs to test more in less time. And over time, the number of paths to test grows.

Using the built-in MSTest tools or any testing tools compatible with Visual Studio, such as NUnit, you can easily create automated code tests for your logic. But not if that logic is hidden inside a code behind file.

By writing the code in components, the developer can use the automated code testing tools within Visual Studio to test the application logic. And more exciting … the developer can easily retest all of the original functionality with those tests as the code changes over time.
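
Continuing the illustrative OrderService sketch from the previous section, the logic can now be exercised directly from an automated test, with no UI involved:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderServiceTest
{
    [TestMethod]
    public void SaveOrder_ValidOrder_ReturnsTrue()
    {
        // Arrange
        var target = new OrderService();
        var order = new Order { Total = 100m };

        // Act
        bool success = target.SaveOrder(order);

        // Assert: the logic is testable because it lives in a component,
        // not in a code behind file.
        Assert.IsTrue(success);
    }
}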

Your thoughts?

February 12, 2014

My Latest Course Went Live Today!

I am happy to announce that my latest Pluralsight course: "Visual Studio Data Tools for Developers" went live today!

As C# or VB.NET developers, we often need to work with a database.

This course covers how to:

  • Use the many SQL Server Data Tools (SSDT) in Visual Studio
  • Manage SQL Server databases with Visual Studio
  • Build database scripts (including data scripts) with a Database Project
  • Publish Database Scripts
  • Unit Test stored procedures
  • Generate a DACPAC
  • Use a DACPAC to deploy database changes

You NEVER have to write database change scripts again!

Check it out and let me know what you think! Feel free to leave comments here or in the Discussion section for the course.

http://www.pluralsight.com/training/Courses/TableOfContents/visual-studio-data-tools-developers


Enjoy!

February 11, 2014

Why Use Code Behind?

Filed under: C#,VB.NET,Visual Studio @ 2:36 pm

There are many .NET developers across the full range of skill levels that are using code behind for the logic of their C# or VB.NET applications, not just for UI management.

This post looks at the possible reasons for this. It would be great to get your thoughts on this topic. (Use the comments section below so the conversation is visible to everyone.)

NOTE: Depending on your UI technology, it makes sense to put code that manages the UI into the code behind. However, there are many alternate places to put the logic of the application.

From what I have seen, using code behind for all application logic normally stems from one (or a combination) of the following:

  • Quick and Easy
  • Tooling
  • Work Environment: Management
  • Work Environment: Work Space
  • Minimum Viable Product

Quick and Easy

If a developer needs to build a quick application for the support staff to update customer types or view log transactions, that developer wants to spend the least amount of time possible. There are other things that directly affect the customers that are much more important to spend time on.

Using code behind is quick and easy. Just double-click on a UI element in a Visual Studio designer, write code, and run. It’s done!

Tooling

By their very nature, the Visual Studio designers are set up for code behind. And many Visual Studio demonstrations show off the cool designer tooling.

New .NET developers often learn to program using these designers with code behind. And even as they progress with their knowledge and tackle more complex applications, they never get out of the "double-click and write code" habit.

Work Environment: Management

Maybe the manager is not technical or has not kept up with current software development life cycle (SDLC) techniques. Or maybe the manager has seen one too many Visual Studio "double-click and write code" demonstrations and has seen how "fast" developers can write code.

In either case, they may only be focused on getting features out quickly. If so, developers may feel that code behind is all they can do in the time allocated.

Work Environment: Work Space

Do the developers work in an office with constant interruptions? Do they get a "ping" every time they get a new email, Facebook post, or tweet? Do they have to provide immediate customer support? Do they work at home in the family room with the kids asking for homework help and the significant other watching TV?

Developing an application using only code behind often requires much less concentration. Just double-click to write the code, double-click to open the code … and the code is all there.

Building applications that use components, base classes, inversion of control, services, patterns, and so on requires significantly more thinking. And as such, they are very challenging to work with in an interrupt-driven environment.

See this post for some interesting information on software developer interruptions.

Minimum Viable Product

Minimum viable product is an agile strategy that focuses on getting the minimum set of software features in place to allow the product to be deployed, and nothing more. The customers can then provide feedback, and the software can be refactored and adjusted as necessary.

Depending on the selected technologies, using code behind for all of the application logic may be the quickest way to a minimum viable product. Refactoring to components, interfaces, and other techniques comes later.

Are there other reasons? Your thoughts?

Deploying a DACPAC with DacFx API

There are several different tools that you, the DBA, or another individual can use to deploy a DACPAC as defined in this prior post. This current post details how to deploy a DACPAC using the DacFx API.

The DacFx API is a set of classes that you can use in your code to perform operations on a DACPAC. This allows you to write your own DACPAC utility application or include DACPAC functionality in any application.

DacFx API Version 3.0 is defined in Microsoft.SqlServer.Dac, which is a different DLL than prior DacFx API versions. Along with a new DLL, the functionality in Version 3.0 changed significantly from prior versions. The information in this post is for Version 3.0 and won’t work with prior DacFx API versions.

NOTE: DacFx 3.0 can work with DACPACs from older versions, but it only generates DACPACs in the 3.0 format.

Why would you want to write your own code to process a DACPAC when you can deploy a DACPAC with existing tools?

  • You can completely control the target connection and database(s) used, the DACPAC that is used, and the deployment options.
  • You can repeat the processing of the DACPAC for multiple databases.

For example, say you have a set of testers, each with their own copy of the database so they can better verify their results. You can store their connections in a table or configuration file. Then write a DACPAC utility application that loops through each connection and deploys the DACPAC to each tester’s database in one operation.

The code below is in C#, but this technique works in VB.NET as well.

using System;
using System.Collections.Generic;
using Microsoft.SqlServer.Dac;

namespace DacpacUtility
{
    public class DacpacService
    {
        public List<string> MessageList { get; set; }

        public DacpacService()
        {
            MessageList = new List<string>();
        }

        public bool ProcessDacPac(string connectionString,
                                    string databaseName,
                                    string dacpacName)
        {
            bool success = true;

            MessageList.Add("*** Start of processing for " +
                            databaseName);

            var dacOptions = new DacDeployOptions();
            dacOptions.BlockOnPossibleDataLoss = false;

            var dacServiceInstance = new DacServices(connectionString);
            dacServiceInstance.ProgressChanged +=
              new EventHandler<DacProgressEventArgs>((s, e) =>
                            MessageList.Add(e.Message));
            dacServiceInstance.Message +=
              new EventHandler<DacMessageEventArgs>((s, e) =>
                            MessageList.Add(e.Message.Message));

            try
            {
                using (DacPackage dacpac = DacPackage.Load(dacpacName))
                {
                    dacServiceInstance.Deploy(dacpac, databaseName,
                                            upgradeExisting: true,
                                            options: dacOptions);
                }

            }
            catch (Exception ex)
            {
                success = false;
                MessageList.Add(ex.Message);
            }

            return success;
        }
    }
}

Walking through this code:

  • A MessageList property retains any messages generated by the process. The code using this class can display the contents of this list.
  • The constructor initializes the MessageList.
  • The only method in this class deploys a DACPAC file.
  • The parameters to the method define the appropriate target connection, target database, and DACPAC path and file name.
  • The process kicks off with a starting message in the MessageList.
  • If desired, you can define deployment options. In this example, the only option set is BlockOnPossibleDataLoss.
  • An instance of the DacFx DacServices class is then initialized.
  • Optionally, you can elect to respond to ProgressChanged events. These events are invoked when the state of the operation changes. In this case, any ProgressChanged messages are added to the MessageList.
  • Optionally, you can elect to respond to Message events. These events are invoked when an operation reports status updates or errors.
  • Within a Try block, the code loads the DACPAC using the DacFx DacPackage class. The argument is the full path and file name to the DACPAC file.
  • Finally, the DACPAC is deployed using the Deploy method of the DacServices class.
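
As a sketch of the multiple-database scenario mentioned earlier (the tester database names and connection strings below are illustrative), you can loop through a set of connections and call ProcessDacPac for each one. As before, replace <path> with the path to your DACPAC:

using System;
using System.Collections.Generic;
using DacpacUtility;

public class MultiDatabaseDeploy
{
    public static void Run()
    {
        // Illustrative set of tester databases: key = database name, value = connection string.
        var testerDatabases = new Dictionary<string, string>
        {
            { "TestACM_Tester1", @"Data Source=.\sqlexpress;Integrated Security=True;" },
            { "TestACM_Tester2", @"Data Source=.\sqlexpress;Integrated Security=True;" }
        };

        var service = new DacpacService();
        foreach (var db in testerDatabases)
        {
            // Deploy the same DACPAC to each tester's database.
            bool success = service.ProcessDacPac(db.Value, db.Key, @"<path>\ACM.Database.dacpac");
            Console.WriteLine(db.Key + ": " + (success ? "deployed" : "failed"));
        }
    }
}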

To see how this method is called, here is an automated code test:

using DacpacUtility;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace DacpacUtility.Test
{
    [TestClass()]
    public class DacpacServiceTest
    {
        public TestContext TestContext { get; set; }

        [TestMethod()]
        public void ProcessDacPacTest()
        {
            // Arrange
            var target = new DacpacUtility.DacpacService();

            // Act
            var success = target.ProcessDacPac(
                @"Data Source=.\sqlexpress;Integrated Security=True;",
                 "TestACM",
                @"<path>\ACM.Database.dacpac");

            // Assert
            Assert.AreEqual(true, success);

            // Display the messages
            foreach (var item in target.MessageList)
            {
                TestContext.WriteLine(item);
            }
        }
    }
}

Walking through this code:

  • A TestContext property is defined to write out to the TestContext.
  • The arrange process sets up the instance of our DacpacService class.
  • The act process calls the ProcessDacPac method of our DacpacService class and passes the appropriate arguments. Be sure to replace <path> with the path to your DACPAC.
  • The assert process asserts that the deployment was successful.
  • It then writes out all of the messages.

For this example, the messages appear as follows:

[Image: test output listing the deployment progress and status messages]

Notice that there is some message duplication here. That is because we add both the Message and the ProgressChanged text to the list.

Enjoy!

For more information on this and other SQL Server Data Tools (SSDT) features, see my latest Pluralsight course: "Visual Studio Data Tools for Developers", which you can find here.

February 10, 2014

Deploying a DACPAC with PowerShell

There are several different tools that you, the DBA, or another individual can use to deploy a DACPAC as defined in this prior post. This current post details how to deploy a DACPAC using Windows PowerShell.

Windows PowerShell is a task automation and configuration management framework that aids in performing Windows administrative tasks.

Here is a script that will deploy a DACPAC.

add-type -path "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\Microsoft.SqlServer.Dac.dll"

$dacService = new-object Microsoft.SqlServer.Dac.DacServices "server=.\sqlexpress"

$dp = [Microsoft.SqlServer.Dac.DacPackage]::Load("<Path>\ACM.Database.dacpac")

$dacService.deploy($dp, "TestACM", "True")


The first line adds the Dac DLL to the PowerShell session. Add-Type is a Utility Cmdlet that adds any .NET Framework type to a PowerShell session. Change the directory as appropriate for your system.

The second line creates an instance of the DacServices object and defines the SQL Server instance for the connection. In this example, we are using SqlExpress.

The third line loads the DACPAC. Be sure to change <Path> to the path of your DACPAC and that it is all on one line.

The last line performs the deployment. The first argument is the loaded DACPAC. The second argument is the database that is the target of the deployment. The third argument is whether to allow update of an existing schema. This is "True" because we want to allow this script to upgrade the TestACM database if it already exists.

For more information on using PowerShell with DACPACs, see this blog post.

Enjoy!

For more information on this and other SQL Server Data Tools (SSDT) features, see my latest Pluralsight course: "Visual Studio Data Tools for Developers", which you can find here.

