I’ve been experimenting with open source micro ORM products recently.  Two of the products a colleague of mine recommended were PetaPoco and Dapper.  In researching the two, I really liked some of the features of PetaPoco, so I decided to go with it.  PetaPoco’s setup is simple: download the package from GitHub or NuGet and add it to your project.  PetaPoco comes in the form of three T4 templates (one master and two related templates) that generate the PetaPoco components (which you have full access to) and the data access components that map to your database.  Since these are T4 templates, you have full control over the customization.

Because I wanted to take advantage of some of PetaPoco’s performance optimizations, I gave up some of its convenience features, such as automatic generation of the select clause, named parameters for queries and stored procedures, etc.  This means I’d write a select like the following:

new Sql()
    .Select("*")
    .From("Users")
    .Where("IsActive = 1")

However, instead of using * as a convenience, I wanted to list all of the column names, since * can perform slightly worse than explicitly named columns; on the other hand, listing each column by hand is harder to keep in sync as table definitions change.  The alternative I chose was to add some additional methods to the template to handle this for me.

Before we get to that, let’s look at what PetaPoco requires.  In my project there are three templates; the first is the master template, named Database.tt.  It’s not critically important to understand everything that goes on in there except for the first section, which holds common settings:

// Settings
ConnectionStringName = "DB";
Namespace = "My.DataAccess";
RepoName = "MyContext";
GenerateOperations = true;
GeneratePocos = true;
GenerateCommon = true;
ClassPrefix = "";
ClassSuffix = "";
TrackModifiedColumns = true;

ConnectionStringName is important and must match a connection string defined within the app.config of the project.  Namespace determines the namespace of the generated components, and RepoName is the name of the custom database class generated.  PetaPoco uses an approach similar to LINQ to SQL or Entity Framework, where the core DataContext/ObjectContext class is inherited from in the designer, creating a new, customized context class.
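As a quick sanity check that the names line up, you can read the same entry at runtime through the standard ConfigurationManager API (a minimal sketch; "DB" is the name from the settings above):

// Requires a reference to System.Configuration
var cs = System.Configuration.ConfigurationManager.ConnectionStrings["DB"];
if (cs == null)
    throw new System.Configuration.ConfigurationErrorsException(
        "No connection string named 'DB' was found in app.config.");
Console.WriteLine(cs.ConnectionString);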

The additional options control whether to generate certain pieces of code (the POCO components, common method operations, etc.) and how the generated POCOs are shaped (class prefixes/suffixes, modified-column tracking).  The shell of the custom database class appears below:

namespace My.DataAccess
{
    public partial class MyContext : Database
    {
        public MyContext() : base("DB")
        {
            CommonConstruct();
        }
    }
}
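With that in place, the generated context works like any other PetaPoco Database instance.  A minimal sketch, assuming a Users table that generates a Users POCO as in the examples below:

var db = new MyContext();
// Fetch executes the query and maps each row to the generated POCO
var activeUsers = db.Fetch<Users>("SELECT * FROM Users WHERE IsActive = @0", true);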

The next template we’ll look at is PetaPoco.Generator.include, which contains the definition of the POCO objects.  It’s in this template that we can customize the generation process to add additional features and make the process smooth.  At the top is the T4 template for generating the custom Database class shown above.  Further down is the template for each POCO, which looks like this:

<# if (GeneratePocos) { #>
<#
foreach(Table tbl in from t in tables where !t.Ignore select t)
{
#>
[TableName("<#=tbl.Name#>")]
<# if (tbl.PK!=null && tbl.PK.IsAutoIncrement) { #>
<# if (tbl.SequenceName==null) { #>
[PrimaryKey("<#=tbl.PK.Name#>")]
<# } else { #>
[PrimaryKey("<#=tbl.PK.Name#>", sequenceName="<#=tbl.SequenceName#>")]
<# } #>
<# } #>
<# if (tbl.PK!=null && !tbl.PK.IsAutoIncrement) { #>
[PrimaryKey("<#=tbl.PK.Name#>", autoIncrement=false)]
<# } #>
[ExplicitColumns]
public partial class <#=tbl.ClassName#> <# if (GenerateOperations) { #>: <#=RepoName#>.Record<<#=tbl.ClassName#>> <# } #>
{

PetaPoco can use these attributes to identify the database table name and primary key column, if that option is enabled.   The partial class body is where the meat of the generated code lives.  This is where I added 3 additional members:

public static string GetPrimaryKeyName()
{
    return "<#=tbl.PK.Name#>";
}

public static string GetTableName()
{
    return "<#= tbl.Name #>";
}

public static string[] GetColumns()
{
    return new string[]
    {
<#
for(int i = 0, len = tbl.Columns.Count; i < len; i++)
{
    Column col = tbl.Columns[i];
    if (!col.Ignore)
    {
#>
        "<#= col.PropertyName #>"<#= (i != tbl.Columns.Count - 1 ? "," : "") #>
<#
    }
}
#>
    };
}

The first method returns the name of the primary key as a static value.  The second returns the name of the table, and the last returns the names of all of the columns as an array.   The benefit of this approach is that it’s not reflection-based; the values are baked in during code generation and therefore compiled rather than evaluated at runtime, which keeps it a very fast operation.  I can now update my SQL statement like the following:

new Sql()
    .Select(Users.GetColumns())
    .From(Users.GetTableName())
    .Where("IsActive = 1")

And, if I want to use the Save override that takes the table name and PK name, I can use this:

var db = new MyContext();
db.Save(Users.GetTableName(), Users.GetPrimaryKeyName(), poco);

To go even further, we can also generate the shell of the select statement:

public static Sql GetSelectSql()
{
    return new Sql()
        .Select(GetColumns())
        .From(GetTableName());
}
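A quick usage sketch that ties the helpers together (Fetch<T> accepting a Sql object is standard PetaPoco):

var db = new MyContext();
var sql = Users.GetSelectSql()
    .Where("IsActive = @0", true);
var activeUsers = db.Fetch<Users>(sql);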

This makes it even easier to craft your select statements.  You would mainly need these helpers if you’ve disabled certain convenience features of the generated database class (like automatic select-clause generation) for performance.  I hope this helps illustrate how we can use code generation to improve our applications.

For a project I was leading at the Baltimore Give Camp in October, one of my teammates suggested we use Trello, a task management web application available online.  I immediately saw its usefulness, and wanted to write this blog post to explain how I use Trello for my own various purposes.  Keep in mind Trello can be used to track anything; it can be used for development (which is what I’m writing about today), but it could also be used for tracking your time, your personal schedule, writing a book, managing your life, or whatever else you may want to track.

Trello works with the concept of a board; on the board is a group of lists, each of which holds a group of cards.  Working our way up, a card is a task.  The card contains everything related to the task, including appended notes, due dates, color coding, etc.  The card resides on a list, which categorizes cards by some designation.  For instance, a list could group tasks by state (not started, in progress, completed), by location (US, Europe), or anything else you like.  A list can have many cards, and cards can be moved from one list to another.  For instance, we can have a list of completed tasks, a list of in-progress tasks, and so on, all within a board (where the board might represent your project, for example).

A sample board looks like the following:

[Image: a sample Trello board with task lists]

In this sample project organization, as tasks come in, they enter the To Estimate list.  Items that have been estimated and are ready for coding are dragged to the To Code list, and then eventually get assigned to a developer or DBA.  Items completed in development appear on the Development Complete list, then make their way to the UAT Testing list, followed by the UAT Complete and Production lists (not shown above).  As each item is completed, someone on the project drags it from left to right.

The great thing about Trello is the visual representation it gives you of the life of your project; at a glance, you can see the status of all your tasks and get a feel for the progress you are making.  And because boards are permission-driven, other members of the team can keep the board up to date in real time.  The project manager knows what tasks are being worked on, the testers can see which tasks fall under their testing responsibilities, the developers know which items they must complete, and items can be reordered within a list to match the client’s priorities.

I personally use Trello for my own management; currently I am using it to manage several of my personal projects, including my Nucleo framework, and I am working on integrating Trello into a team project as well.  Trello doesn’t yet have every feature people expect from task management software, and while some of those features are coming in the future, Trello already has a lot to offer any organization, whether a development project, a web design company, or an open source project.  Beyond task management, a Trello board can be opened up to the world, providing a means to communicate which task items are being worked on and allowing users to vote on the features they would personally like to see in your product.

Trello can be found at http://trello.com.

I am setting up a new virtual machine with Windows 7 and all updates installed.  Visual Studio 2010 installed in about an hour, and I followed up by adding the ASP.NET MVC 3 install.  My next task was to install SP1, which took 4 hours.  Most of the slowness was the download, because I recently moved and we signed up for the basic internet package (up to 1 Mbps download) for now.  That turned what was expected to be a 45 minute install into a 4 hour one.  I was hoping it would be faster, but that’s what you get when you go for the basic DSL package 🙂  In the past, I had a faster package and did not experience this time lag.

I thought it might have been this issue: http://blogs.msdn.com/b/heaths/archive/2011/03/02/visual-studio-2010-service-pack-1-installing-for-over-2-hours-could-be-a-sign-of-a-problem.aspx  However, I followed the steps (I had to copy the .tmp.html file and paste it into another folder to read it, because the file was locked), and the WM_ACTIVATEAPP line that blog mentions was not present.  So for me the issue was purely download speed.

Just passing this on in case it may help someone to make a determination.

I figured I’d write a blog post detailing the nightmare scenario I encountered with Entity Framework and SQL Server stored procedures, trying to get Entity Framework to generate the metadata.  It all started when I created a complicated search stored procedure, which included subqueries, inner and outer joins, table variables, etc.  For some reason, Entity Framework wouldn’t generate the complex type associated with the stored procedure’s result set.  Every time I tried to create the function import, I got the error: "The Selected Stored Procedure Returns No Columns".

Some googling revealed that Entity Framework 4 runs the command SET FMTONLY ON before the stored procedure call it issues to gather metadata.  This prevents the stored procedure from actually executing against the database and potentially inserting data erroneously; the server returns only result set metadata.  But in some, if not many, scenarios this command breaks the stored procedure import for us developers.  So I went through this cyclical process:

  • Script the proc as an alter statement in SQL Server Management Studio (SSMS)
  • Add SET FMTONLY OFF in the proc body (right after the AS keyword)
  • Run it
  • (I can’t remember at the moment whether I had to delete/recreate the proc in EF)
  • Ensure the complex type/function import doesn’t exist; delete it if it does
  • Go to the EF model, right-click the stored procedure to import, and select Add Function Import.  Select Create New Complex Type, and it works; no "The Selected Stored Procedure Returns No Columns" message
  • Remove the SET FMTONLY OFF line from the proc, and re-execute the alter statement in SSMS
Table variables make things interesting.  According to another blog, if the procedure selects from a table variable, SET FMTONLY OFF is not needed; in fact, in one example, using SET FMTONLY OFF caused the "no columns" message to appear!  So leaving the default behavior alone worked fine, and I could do this:

create procedure dbo.CustomersSelectAll
as
declare @table table
(
    -- fields
)
insert into @table
select -- long complex query
select * from @table

Whereas, without the table variable, I needed:

create procedure dbo.CustomersSelectAll
as
set FMTONLY off
select -- long complex query
Now we get to the most interesting part: when selecting data from another database, or through a synonym pointing to another database, Entity Framework doesn’t work at all unless you use the table variable approach.  Without a table variable, neither SET FMTONLY ON nor SET FMTONLY OFF made any difference.  The only way I could get it to work was to insert the results into a table variable, and send the table variable’s contents back to the caller.
Has anyone experienced this personally?  Have you managed to get a cross-database stored procedure to work without a table variable?  In my case the result set was small enough that the table variable caused no performance degradation.  Still, it would be good to understand why this is an issue, so if anyone can leave a comment, I would appreciate it.

Controls like the LinkButton in Web Forms provide a postback mechanism that posts data to the server through a client-side JavaScript method called __doPostBack (or alternatively WebForm_DoPostBackWithOptions).  This method takes the unique ID of the control as the first argument, and a combination of command name and/or argument (concatenated together if desired) as the second.  The recipient of this post is the IPostBackEventHandler.RaisePostBackEvent method; as long as the targeted control implements this interface, it is notified that it was the target of a postback to the server.  Internally, the framework knows this via the __EVENTTARGET form key, with the __EVENTARGUMENT form key carrying the command name and/or argument.
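To make the flow concrete, here is a rough sketch of what that dispatch amounts to on the server (simplified for illustration; this is not the actual framework source):

// Conceptually, during postback processing:
string target = Request.Form["__EVENTTARGET"];       // UniqueID of the posting control
string argument = Request.Form["__EVENTARGUMENT"];   // command name and/or argument
Control ctl = Page.FindControl(target);
var handler = ctl as IPostBackEventHandler;
if (handler != null)
    handler.RaisePostBackEvent(argument);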

This interface isn’t just for server controls; it’s also a quick option you can use for a user control.  If you need the user control to post back and handle the postback event directly, this approach can be very handy.  Simply define a user control like the following:

public partial class MyUserControl: System.Web.UI.UserControl, IPostBackEventHandler

Handle the RaisePostBackEvent like so:

public void RaisePostBackEvent(string eventArgument)
{
    if (eventArgument.Contains("Select"))
    {
        //Process the select command
    }
}

And inside the user control, include a reference like this somewhere:

__doPostBack('<%= this.UniqueID %>', 'Select');

The this.UniqueID expression grabs the user control’s unique ID from within the user control itself.  It can also be used outside of the user control by referencing userControlInstance.UniqueID.  So there are many options here.
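For instance, here’s a sketch of emitting the postback script from the page’s code-behind (ucSearch is a hypothetical user control instance declared on the page):

// ucSearch is assumed to be a user control on this page
string script = string.Format("__doPostBack('{0}', 'Select');", ucSearch.UniqueID);
Page.ClientScript.RegisterStartupScript(this.GetType(), "ucSearchSelect", script, true);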

MSDN documentation can be found here: http://msdn.microsoft.com/en-us/library/system.web.ui.ipostbackeventhandler.raisepostbackevent.aspx

jQuery provides a superb way of finding and manipulating elements.  But if you don’t watch yourself, it can quickly become quite dangerous and easily broken.  For instance, imagine this page structure, with some example elements listed as children:

Page
  UC1
    - div class="Target1"
  UC2
    - div class="Target2"

and so on.  If the page were to do something like:

$(".Target2").find("div.Child").css("display", "none");

You can imagine what could easily happen over time.  Some other developer modifies the page and removes that div completely because they didn’t even know about it.  As user controls get shared across pages, people may accidentally make modifications to the user control while working in other pages, and in doing so break the page you were working in.

This is why I’m an advocate for utilizing jQuery plugins or widgets as much as possible.  The user control could create its own widget that registers itself with the page.  The page can then call the user control’s methods or do whatever it needs to do, instead of working with the internal elements directly.  Otherwise, you may find yourself with broken code later on.  If you don’t like the plugin/widget approach, at least try to keep all of the JavaScript related to a user control within the user control’s body, or in a script file where it won’t easily be lost.

Anyone who develops on the web these days develops with JavaScript.  As applications become richer in capability, they need to rely on the functionality JavaScript has to offer, or seek another solution like Silverlight or WPF.  As ASP.NET Web Forms or MVC applications grow in size, functionality gets separated into classes, pages (views), user controls (partial views), etc.  When it comes to JavaScript, scripts can end up scattered across script in a page, script in a user control, script emitted from code, embedded JavaScript files, or even dynamically loaded JavaScript.

It’s easy to write JavaScript, but it can be hard to write it in a way that stays maintainable inside an ASP.NET page that has 10-20 user controls.  One user control may use jQuery to find an element in another user control, and that user control talks to yet another user control.  It all works because at the time of coding everything was set up with a specific hierarchy.  But as soon as the user adds functionality or changes their mind about existing functionality, the structure changes, leaving the developer who takes over the work with an array of errors and a whole lot of mess.

Here are some tips to make these interactions across user controls more manageable.  First, it may be best to control these interactions through a plugin.  For instance, we can start with a component in the page:

if (typeof(window["$p"]) === "undefined") {
    window["$p"] = (function() {
        var ctls = {};

        return {
            addControl: function(name, ctl) {
                ctls[name] = ctl;
            }
        };
    })();
}

The user control can then interface with the page by adding itself to the page’s list of child controls:

window["$uc1"] = (function() {
    return {
        init: function() {
            if (typeof($p) !== "undefined")
                $p.addControl("search", this);
        },

        //set of functions to manage features of the user control
    };
})();

The samples above would need some tweaking, but this is a rough implementation.  Since we now have an established hierarchy, the page can drive features within its user controls, or a user control can communicate actions and events back by referencing the page through the $p variable.

Loading Panels

For instance, with a loading panel that appears over the page, the page component can define methods to show and hide the panel, as in the following (assuming the display div is styled via CSS):

window["$p"] = (function() {
    return {
        //...existing members as before...

        showPanel: function() {
            $("#display").show();
        },

        hidePanel: function() {
            $("#display").hide();
        }
    };
})();

Events

Sometimes an event fires in the user control and the page needs to know about it.  We can use the ASP.NET AJAX eventing framework, via the Sys.EventHandlerList component, to register and fire events.  We can add events and methods to the user control as in:

window["$uc1"] = (function() {
    var evt = new Sys.EventHandlerList();
    return {
        //...existing members as before...

        add_itemSelected: function(h) {
            evt.addHandler("itemSelected", h);
        },

        remove_itemSelected: function(h) {
            evt.removeHandler("itemSelected", h);
        },

        _onitemSelected: function(e) {
            var h = evt.getHandler("itemSelected");
            if (h) h(this, e);
        },

        init: function() {
            //Add control event handler
            //in the event handler, fire _onitemSelected
        },

        //set of functions to manage features of the user control
    };
})();

The page can then attach an event handler and receive the bubbled notification.  Alternatively, a callback or observer pattern could be used here too.

When using the entity objects generated by Entity Framework, it’s tempting to use the drill-through navigation properties that are available.  For instance, suppose you had this code sample (ctx is a reference to the ObjectContext):

var users = from u in ctx.Users
            where u.IsActive == true
            select u;

foreach (var u in users)
{
    var recentPassword = u.UserPasswords.FirstOrDefault(i => i.IsActive == true);

    if (recentPassword != null)
    {
        //Do something
    }
}

It’s important to break this down to determine what’s happening with this code, minus the query (which is the simple part).

foreach (var u in users)

Upon looping, the first iteration is what actually executes the database query behind the scenes.  Simple enough.

var recentPassword = u.UserPasswords.FirstOrDefault(i => i.IsActive == true);

Here is where we have to be careful.  UserPasswords is a collection of child entities of the User entity; referencing u.UserPasswords lazy-loads all UserPassword objects related to that user, and only after all of them are loaded is the first active one picked out in memory.  So for each active user, we are loading all of that user’s passwords from the database.  If we were to do:

var users = from u in ctx.Users
            let rp = u.UserPasswords.FirstOrDefault(j => j.IsActive == true)
            where u.IsActive == true
            && rp.IsActive == true
            select u;

This statement translates to a SQL query with a subquery, and in that subquery the passwords related to the user are queried.  All of the filtering is done for us in the database, and as long as we don’t need to access UserPasswords directly, this can suffice.  If we need the most recent record as well, we can project into an anonymous type:

var users = from u in ctx.Users
            let rp = u.UserPasswords.FirstOrDefault(j => j.IsActive == true)
            where u.IsActive == true
            && rp.IsActive == true
            select new { User = u, RecentPassword = rp };

In this scenario, the let clause is performed as a database subquery.  Each active user and its most recent password are represented by properties on the anonymous type, and no further lazy loading is needed.
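Consuming that shape is then straightforward (a sketch; the member names come from the projection above).  As an aside, EF 4’s Include("UserPasswords") is another way to avoid the per-row loads, though it eagerly pulls back every related password rather than just the active one.

foreach (var item in users)
{
    // Both members were materialized by the single query; no lazy load fires here.
    if (item.RecentPassword != null)
    {
        //Do something with item.User and item.RecentPassword
    }
}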

I recently created an MVP framework (http://nucleo.codeplex.com) that I was porting to a Silverlight 4 library for a project I’m working on.  As I was linking code into the project, I quickly realized that since Silverlight 4 is a subset of the .NET Framework, I was running into issues with my custom code, and this spurred a philosophical debate in my mind: do I create my own custom components to handle each scenario, or do I refactor and leverage what’s already in the .NET Framework?

I could very easily get caught in a whirlwind of wrappers, facades, and components of my own design to handle the situations I was dealing with.  I could also embed #if SILVERLIGHT directives everywhere to handle some of the scenarios I ran into.  On the other hand, if the framework already does it in a way that requires some sacrifice of features or changes to the core code, is there value in that?  In reality, it depends.  Moving to Silverlight, some components available in a web or Windows app may not exist in the Silverlight framework, forcing us to develop the feature we need anyway.  On some level, this is why developers create custom wrappers, facades, or adapters: as environments change, certain framework code may or may not be present.  The converse can be true too: as environments change, our custom code could break, and using the framework would have avoided a refactoring.  In some ways, design can become a double-edged sword very quickly.  This of course depends on the type of application; a traditional two-tier web application may not need to be concerned, whereas an n-tier application may need to contemplate what it would take to share the middle layer across a variety of front-ends.

However, an environment change can also be a change from waterfall to agile development, and with it TDD becomes a factor in the design; this is often a reason developers approach the design with abstractions in their own code in the first place.  Those who come from an ASP.NET Web Forms background know the ASP.NET framework is tightly coupled with the web and as such is harder to test, unless you resort to wrappers around its services.  One of the core features I sought to implement in my framework was wrappers for the various services a developer may use; sure, the HttpContextBase base class is nice, but it’s still tightly coupled with the web, and a wrapper exposing just the small portion of functionality you need comes in really handy here.

As you probably know, Entity Framework 4 translates LINQ queries to SQL to execute on the backend.  However, the SQL that Entity Framework renders may look much different from what you would write by hand; its query-building process may involve subqueries, nested joins, or other constructs.

var ctx = new AdventureWorksObjectContext();

var results = from a in ctx.Addresses
              where a.StateProvince.StateProvinceCode == "PA"
              && a.CustomerAddresses.Count > 0
              select a;

This query, a LINQ query against the AdventureWorks database, returns a collection of Address objects.  Even though the signature of the query is IQueryable<Address>, the underlying instance is actually an ObjectQuery<Address>.  The ObjectQuery is useful for many things; one of them is the ToTraceString method, which returns the SQL that will be run against the database.  It’s very easy to call, as shown below:

var query = results as ObjectQuery<Address>;
this.lblSQL.Text = query.ToTraceString();


The underlying SQL query appears below.  Notice how simple the LINQ query above is by comparison: it drills through the foreign key reference from Address to StateProvince to reach the StateProvinceCode column, and it uses a CustomerAddresses subquery for the count.  The query renders as the following:

SELECT [Project1].[AddressID] AS [AddressID],
[Project1].[AddressLine1] AS [AddressLine1],
[Project1].[AddressLine2] AS [AddressLine2],
[Project1].[City] AS [City],
[Project1].[StateProvinceID] AS [StateProvinceID],
[Project1].[PostalCode] AS [PostalCode],
[Project1].[rowguid] AS [rowguid],
[Project1].[ModifiedDate] AS [ModifiedDate]
FROM
(
  SELECT [Extent1].[AddressID] AS [AddressID],
  [Extent1].[AddressLine1] AS [AddressLine1],
  [Extent1].[AddressLine2] AS [AddressLine2],
  [Extent1].[City] AS [City],
  [Extent1].[StateProvinceID] AS [StateProvinceID],
  [Extent1].[PostalCode] AS [PostalCode],
  [Extent1].[rowguid] AS [rowguid],
  [Extent1].[ModifiedDate] AS [ModifiedDate],
  [Extent2].[StateProvinceCode] AS [StateProvinceCode],
  (
    SELECT COUNT(1) AS [A1]
    FROM [Sales].[CustomerAddress] AS [Extent3]
    WHERE [Extent1].[AddressID] = [Extent3].[AddressID]
  ) AS [C1]
  FROM [Person].[Address] AS [Extent1]
  INNER JOIN [Person].[StateProvince] AS [Extent2]
  ON [Extent1].[StateProvinceID] = [Extent2].[StateProvinceID]
) AS [Project1]
WHERE (N'PA' = [Project1].[StateProvinceCode])
AND ([Project1].[C1] > 0)

It’s very handy to verify the underlying SQL, using either this construct or a database profiling tool.
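To make the cast a one-liner, here is a small helper extension of my own (just a sketch; in EF 4, ObjectQuery<T> lives in the System.Data.Objects namespace):

using System.Linq;
using System.Data.Objects;

public static class QueryTraceExtensions
{
    // Returns the provider SQL for an EF 4 query, or null when the
    // IQueryable is not backed by an ObjectQuery (e.g., LINQ to Objects).
    public static string ToTraceStringOrNull<T>(this IQueryable<T> source)
    {
        var objectQuery = source as ObjectQuery<T>;
        return objectQuery == null ? null : objectQuery.ToTraceString();
    }
}

With that in place, the earlier snippet becomes this.lblSQL.Text = results.ToTraceStringOrNull();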
