On All Things Web

Oct 09
I’ve recently written a series on using Knockout JS.  You can view the series using the following links:
Sep 08
I’ve been working with LimeJS, developing a sample game with this HTML5/JavaScript framework.  LimeJS is a free framework that can render games in the DOM, on the HTML5 canvas, or even with WebGL.  The game I’ve been building is a letter-based game; it’s currently a simple shell (without much styling) and is still being developed.  You can read more about it in my series here:

This series is still being developed and will be added to over time.
Jul 10
Strangely enough, I got this error in one of my ASP.NET Web Forms projects.  I tried debugging through the application but could never get Visual Studio to break on the actual error.  Some research led to the following two posts that may offer some help:

Both of these fixes are valid; however, in my scenario the problem was isolated to a single page.  My page had a constructor that checked information in the current request, which it accessed using:

public MyPage()
{
    string value = this.Request.QueryString.Get("x");
}

This caused the exception.  Even though Page.Request is normally a valid way to refer to the request, in the constructor it is accessed too early in the page lifecycle, before the request has been assigned to the page.  In scenarios like this, it needs to be replaced with HttpContext.Current.Request.
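A minimal sketch of the corrected constructor (using the same hypothetical query-string key as above):

public MyPage()
{
    // HttpContext.Current is available even this early, unlike Page.Request.
    string value = HttpContext.Current.Request.QueryString.Get("x");
}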
Jul 04
On many social networking sites, pasting a URL from a news site into Facebook or LinkedIn displays a nice synopsis of that link.  The URL pasted into the status box is read, capturing information about the page and its specific contents, such as a title, description, thumbnail, and so on.  One common technique is to extract the Open Graph Protocol (http://ogp.me/) markup defined in the header, which provides the additional metadata.  An Open Graph tag is a meta tag whose property name is prefixed with “og:”; each page can have its own specific metadata that describes the purpose of that page.

I had already created a site without Open Graph tags, so I began to think of a way to incorporate them with little effort.  After throwing out my more complex designs (which didn’t add much benefit beyond a little more reusability), it turned out the simplest way was the following helper method.  It extracts its information from the current page request, the ViewBag, and other application constants.
@helper OpenGraph()
{
    var bag = ((WebViewPage)WebPageContext.Current.Page).ViewBag;
    <meta property="og:title" content="@Constants.Application.Name: @bag.Title" />
    <meta property="og:type" content="website" />
    <meta property="og:url" content="@WebPageContext.Current.Page.Context.Request.Url.ToString()" />
    <meta property="og:site_name" content="@Constants.Application.Name" />
    if (bag.ImageUrl != null)
    {
        <meta property="og:image" content="@WebPageContext.Current.Page.Href(bag.ImageUrl)" />
    }
    else
    {
        <meta property="og:image" content="@WebPageContext.Current.Page.Href("~/Content/Images/logo.png")" />
    }
    if (bag.Description != null)
    {
        <meta property="og:description" content="@bag.Description" />
    }
}

The tags used are:
  • title – the title of the page
  • type – the type of content (which has pre-defined values, listed at http://ogp.me/)
  • url – the current URL of the page
  • site_name – the name of the site
  • image – the thumbnail image that represents the page
  • description – the description of the page

Each of my views then defines a small header code block, adding a few properties to the ViewBag.  Every view already defines a Title property by default; a couple more (Description and ImageUrl) can be added, and the helper reads these from the ViewBag and renders them into the markup.
@{
    ViewBag.Title = "Create a New Group";
    ViewBag.Description = "Use this feature to create a new group.";
}

Notice I omitted the image property, which will then use the site’s logo link.
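For illustration, assuming the application name constant is “MySite” and the page above is served at /Groups/Create (both made up here), the helper would emit markup along these lines:

<meta property="og:title" content="MySite: Create a New Group" />
<meta property="og:type" content="website" />
<meta property="og:url" content="http://www.example.com/Groups/Create" />
<meta property="og:site_name" content="MySite" />
<meta property="og:image" content="/Content/Images/logo.png" />
<meta property="og:description" content="Use this feature to create a new group." />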

To use this helper, I made OpenGraph a global helper (defined in the App_Code folder) and called it between the <head></head> tags in the layout page.  Each page then incorporates the Open Graph tags automatically, based on the information fed in from its header code block, and I only have to call the method once.  Very simple.
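As a sketch, assuming the helper lives in a file named App_Code/Helpers.cshtml (global helpers are invoked through their file name; the file name here is an assumption), the layout’s head looks something like:

<head>
    <title>@ViewBag.Title</title>
    @Helpers.OpenGraph()
    @* remaining head content *@
</head>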

 
Apr 21
WURFL is a service used to detect mobile devices.  As stated on its web site, WURFL “is a Device Description Repository (DDR), i.e. a software component which contains the descriptions of thousands of mobile devices. In its simplest incarnation, WURFL is an XML configuration file plus a set of programming APIs to access the data in real-time environments.”

WURFL is used by various products, but can also be used directly in your web site.  If you are using .NET, whether web forms or MVC, you can get instructions on how to set it up here: http://wurfl.sourceforge.net/dotNet/

I had trouble getting the base installation to work correctly; I was getting a random error when it attempted to build the manager from the configuration.  It was probably something I wasn’t doing, so to get up and running quickly I chose to write my own wrapper around instantiating the manager.  Note that in this example, hard-coding the file paths was fine for me, since this was inside a single web site and I was doing the same task over and over again.  If this were a reusable solution, passing these values in as parameters would be preferred.
public class WurflDeviceManager
{
    public const String WurflManagerCacheKey = "__WurflManager";
    public const String WurflDataFilePath = "~/App_Data/wurfl-latest.zip";
    public const String WurflPatchFilePath = "~/App_Data/web_browsers_patch.xml";

    private static IWURFLManager CreateManager()
    {
        var wurflDataFile = HttpContext.Current.Server.MapPath(WurflDataFilePath);
        var wurflPatchFile = HttpContext.Current.Server.MapPath(WurflPatchFilePath);

        var configurer = new InMemoryConfigurer()
            .MainFile(wurflDataFile)
            .PatchFile(wurflPatchFile);

        var manager = WURFLManagerBuilder.Build(configurer);
        HttpContext.Current.Cache[WurflManagerCacheKey] = manager;
        return manager;
    }

    public static IWURFLManager GetManager(HttpContextBase context)
    {
        var wurflManager = context.Cache[WurflManagerCacheKey] as IWURFLManager;
        if (wurflManager == null)
        {
            wurflManager = CreateManager();
        }
        return wurflManager;
    }
}

The approach I took was a lazy load: check the cache first for the object, and instantiate it and store it in the cache if it isn’t there.  This way, an instance is always guaranteed to be present.
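As a usage sketch (assuming the standard WURFL .NET API calls GetDeviceForRequest and GetCapability, and the is_wireless_device capability), a page or controller could then inspect the requesting device like this:

var manager = WurflDeviceManager.GetManager(new HttpContextWrapper(HttpContext.Current));
var device = manager.GetDeviceForRequest(HttpContext.Current.Request.UserAgent);

// WURFL capabilities come back as strings, e.g. "true"/"false".
bool isMobile = string.Equals(device.GetCapability("is_wireless_device"), "true", StringComparison.OrdinalIgnoreCase);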
Feb 23
I was looking at the MVC ReCaptcha project available at http://mvcrecaptcha.codeplex.com/.  It’s pretty simple to add to your MVC project, and it’s effective for implementing the core captcha capabilities quickly.  The framework wraps the ASP.NET Web Forms control available on Google Code, for which Google has not created its own MVC implementation (to my knowledge).  MvcReCaptcha essentially wraps the ASP.NET version of the control and writes out what the control renders.  It also offers an action filter attribute that wraps the ReCaptcha validator component.  It’s not difficult to implement at all (full instructions are on the web site), and MvcReCaptcha provides the source as well.  Using it requires adding a captcha control to an MVC view using:
<%= Html.GenerateCaptcha() %>

When the form posts back, the captcha is validated and the result is pushed to the targeted action method through a captchaValid parameter.  For instance, suppose you have an HttpPost action method; it can check whether the captcha was valid using:
[HttpPost, CaptchaValidator]
public ActionResult Save(MyObj obj, bool captchaValid)
{
    if (!captchaValid)
    {
        ViewBag.Failed = true;
        return View();
    }

    // Handle the valid case here (omitted in this excerpt).
    return View();
}

Add the keys that the Google reCAPTCHA site gives you to the config file, and you are done.  However, I did find I needed to make two modifications to the process.  The code below shows my changes; please look at the MvcReCaptcha project source to understand the context.  The first modification was to the CaptchaValidator attribute: check whether the form fields come back null, and if they do, flag the result as invalid immediately.
var captchaChallengeValue = filterContext.HttpContext.Request.Form[ChallengeFieldKey];
var captchaResponseValue = filterContext.HttpContext.Request.Form[ResponseFieldKey];

// Begin added code
if (captchaChallengeValue == null || captchaResponseValue == null)
{
    filterContext.ActionParameters["captchaValid"] = false;
}
// End added code

By default, the validator attribute expects the challenge and response values supplied by the ReCaptcha control to be present.  However, there are scenarios where they are not.  For instance, suppose you have a login form (which I have) and you only want to show the captcha after a number of unsuccessful logins: the first X attempts don’t show the captcha, but after X invalid attempts it should appear.  This modification handles those first attempts by marking the response invalid when the fields aren’t present at all, as in the sketch below.
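A hypothetical sketch of that login flow, tracking failed attempts in session (the LoginModel type, session key, and threshold of 3 are all made up for illustration):

[HttpPost, CaptchaValidator]
public ActionResult Login(LoginModel model, bool captchaValid)
{
    int failedAttempts = (int)(Session["FailedLogins"] ?? 0);
    bool captchaRequired = failedAttempts >= 3;

    // Only enforce the captcha once it has actually been shown to the user.
    if (captchaRequired && !captchaValid)
    {
        ViewBag.ShowCaptcha = true;
        return View(model);
    }

    // Validate credentials here; increment Session["FailedLogins"] on failure (omitted).
    return View(model);
}

The view would then render the captcha only when the attempt count (or a flag like ViewBag.ShowCaptcha) says to.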

Additionally, the ReCaptcha helper renders a string; when I switched to Razor, the output showed up as an encoded HTML string.  I changed the signature of the helper to:
public static IHtmlString GenerateCaptcha(this HtmlHelper helper)

The control output was the result of a StringWriter.ToString() call; wrapping this in an MvcHtmlString, as in:
return new MvcHtmlString(htmlWriter.InnerWriter.ToString());

allows the content to render as raw markup rather than encoded text.
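With that change in place, the helper can be used directly from a Razor view:

@Html.GenerateCaptcha()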
Feb 16

Dynamic adds a lot of capabilities

I’m really impressed by the dynamic keyword introduced in .NET 4.0.  It opens up a lot of capabilities: as I develop software, it lets me avoid wrapper components and avoid using reflection in certain places.  The first scenario is where I had two similar but distinct objects for which I needed one common interface.  This is useful when the two objects have a similar API, but the APIs are disparate because there is no common interface or base class.  I would use a class like the following:
public class MyWrapper : ICommonInterface
{
	private MyObject1 _o1;
	private MyObject2 _o2;

    public MyWrapper(MyObject1 o) { _o1 = o; }	

	public MyWrapper(MyObject2 o) { _o2 = o; }	

	//Method defined in interface, which both first and second object have.
	public void DoThis()
	{
		if (_o1 != null)
			_o1.DoThis();
		else
			_o2.DoThis();
	}
}

It was painful to do this at times, but both objects were closed for modification, and therefore I needed a wrapper with one common signature.  However, with dynamic, this can all go away and we can implement this more simply:
public class MyWrapper : ICommonInterface
{
	private dynamic _o;

    public MyWrapper(dynamic o) { _o = o; }
	
	//Method defined in interface, which both first and second object have.
	public void DoThis()
	{
		_o.DoThis();
	}
}

We may not actually need the wrapper, but if we do (when we need the object to implement ICommonInterface and the object passed in is closed for modification and missing that interface), we can use dynamic and assume it exposes the same members.  Otherwise, if it doesn’t, you’ll find out at runtime :-)
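A quick usage sketch (MyObject1 and MyObject2 are the closed types from the earlier example):

// Both closed types are consumed through the one common interface.
ICommonInterface first = new MyWrapper(new MyObject1());
ICommonInterface second = new MyWrapper(new MyObject2());

first.DoThis();  // resolved against MyObject1 at runtime
second.DoThis(); // resolved against MyObject2 at runtime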

Another reason to use dynamic is to avoid reflective calls, which can clean up some of the code and may add some performance benefits.  For instance, I could replace this code:
private object GetValue(object original)
{
	MethodInfo method = original.GetType().GetMethod("GetValue");
	return method.Invoke(original, new object[] { });
}

With this:
private object GetValue(object original)
{
	dynamic o = original;
	return o.GetValue();
}

That is, of course, a simplistic example; however, there are times when this actually comes in handy, assuming the value passed in really does have a GetValue method.  The benefit of the reflection approach is that I could check for the existence of the method by adding an “if (method != null)” check before trying to invoke it.  I do not have that luxury with the dynamic keyword.
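The closest safeguard with dynamic is to catch the binder exception thrown when the member is missing; a minimal sketch of the same GetValue scenario:

using Microsoft.CSharp.RuntimeBinder;

private object GetValue(object original)
{
	dynamic o = original;
	try
	{
		return o.GetValue();
	}
	catch (RuntimeBinderException)
	{
		// The runtime binder could not find a GetValue() method on the object.
		return null;
	}
}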

While dynamic provides benefits to your application architecture, there are times when good object oriented design can circumvent both the need for reflection and dynamic, as in the following:
private object GetValue(object original)
{
	if (original is IValueGetter)
		return ((IValueGetter)original).GetValue();
	else
		return null;
}

Here we assume that anytime we need to get a value, the object has an IValueGetter interface; if not, we return null because we don’t care about getting the value.  Again, a simplistic view, but you get the point.
Feb 10
I posted a while back about a way to embed metadata about an entity’s table and columns into the entity itself. This is useful if you disable the auto-select and auto-named-parameter features of PetaPoco. However, I realized how bad that approach actually was: embedding that information into the entity violated the Single Responsibility Principle (SRP), which states that each object should have a single responsibility, and doing this gave it two. So instead, I decided to use an alternative solution.

For my needs, a repository pattern was helpful to centralize the location of all my queries. However, I didn’t want to worry about a grandiose repository scheme; I really needed something very simple, as the project this is for is very small. So I chose to embed the repository into the PetaPoco.Generator.include T4 template.

Below is my added repository, which contains those useful methods. Again, you may not need this, but if you want to get the most speed out of PetaPoco, it’s something to consider:

public partial class <#=tbl.ClassName#>Repository
{
    protected string[] GetColumns()
    {
        return new string[]
        {
<#
    for(int i = 0, len = tbl.Columns.Count; i < len; i++)
    {
        Column col = tbl.Columns[i];
        if (!col.Ignore)
        {
#>
            "<#= col.PropertyName #>"<#= (i != tbl.Columns.Count - 1 ? "," : "") #>
<#
        }
    }
#>
        };
    }

    protected Sql GetBaseSelectSql()
    {
        return new Sql()
            .Select(GetColumns())
            .From("<#= tbl.Name #>");
    }

    protected string GetTableName()
    {
        return "<#= tbl.Name #>";
    }

    public <#=tbl.ClassName#> Get(int id)
    {
        var ctx = new <#=RepoName#>();
        return ctx.FirstOrDefault<<#=tbl.ClassName#>>(this.GetBaseSelectSql().Where("<#=tbl.PK.Name#> = @0", id));
    }

    public void Create(<#=tbl.ClassName#> obj)
    {
        if (obj == null)
            throw new ArgumentNullException("obj");

        var ctx = new <#=RepoName#>();
        ctx.Insert("<#=tbl.Name#>", "<#=tbl.PK.Name#>", obj);
    }

    public void Delete(<#=tbl.ClassName#> obj)
    {
        if (obj == null)
            throw new ArgumentNullException("obj");

        var ctx = new <#=RepoName#>();
        ctx.Delete("<#=tbl.Name#>", "<#=tbl.PK.Name#>", obj);
    }

    public void DeleteByKey(int id)
    {
        var ctx = new <#=RepoName#>();
        ctx.Execute("delete from <#=tbl.Name#> where <#=tbl.PK.Name#> = @0", id);
    }

    public void Update(<#=tbl.ClassName#> obj)
    {
        if (obj == null)
            throw new ArgumentNullException("obj");

        var ctx = new <#=RepoName#>();
        ctx.Update("<#=tbl.Name#>", "<#=tbl.PK.Name#>", obj);
    }
}

RepoName is the name of the generated PetaPoco database class used to work with data objects. Notice how, in the modification methods (Create, Update, Delete), the table and primary key names are supplied explicitly for a small performance gain; the T4 template injects them at code-generation time. Some queries are generated too (Get and DeleteByKey), which we could take even further. And lastly, at the top, are the helper methods for referring to fields.
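For a hypothetical Users table, the generated repository could then be consumed like this (the class and property names below depend on your schema and are made up for illustration):

var repo = new UsersRepository();

var user = repo.Get(1);        // select by primary key using the generated column list
user.IsActive = false;
repo.Update(user);             // update using the generated table/primary key names

repo.DeleteByKey(42);          // delete without loading the entity first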

I placed this near the top of the template’s POCO-generation if block, just before the entity definition, as in the following. At the time of writing, this is at line 134 of the template.


<# if (GeneratePocos) { #>
<# foreach(Table tbl in from t in tables where !t.Ignore select t) { #>

// (the generated repository shown above goes here)

// POCO definition
[TableName("<#=tbl.Name#>")]
<# if (tbl.PK!=null && tbl.PK.IsAutoIncrement) { #>
<# if (tbl.SequenceName==null) { #>
[PrimaryKey("<#=tbl.PK.Name#>")]
<# } else { #>
[PrimaryKey("<#=tbl.PK.Name#>", sequenceName="<#=tbl.SequenceName#>")]
<# } #>
<# } #>
<# if (tbl.PK!=null && !tbl.PK.IsAutoIncrement) { #>
[PrimaryKey("<#=tbl.PK.Name#>", autoIncrement=false)]
<# } #>
[ExplicitColumns]
public partial class <#=tbl.ClassName#> <# if (GenerateOperations) { #>: <#=RepoName#>.Record<<#=tbl.ClassName#>> <# } #>
{
Jan 17
We experienced an odd bug after upgrading from a 2010 version of the Telerik ASP.NET AJAX controls to the latest 2012 version (the most recently released hotfix). I kept getting an “Object reference not set to an instance of an object.” error in the RadComboBox.OnInit event, raised from the GridPagerItem of the RadGrid. Strangely enough, I couldn’t get any insight into the problem because there was no stack trace. So I began my day-long investigation by downloading the source code, adding it to the solution temporarily (so I could debug into it), and running the code again. This helped me identify which grid was causing the problem and look at possible reasons why. It was useful because I noticed that on that grid the pager was invisible. Turning on the pager seemed to resolve the issue; however, this wasn’t the solution elsewhere.

Later on, I played with the application, adding and removing features from the grid, and eventually found it was due to some code attached to the grid’s PreRender event. The code did this:

protected void RadGrid_PreRender(object sender, EventArgs e)
{
    // Hide columns not supposed to be shown
    // Set the width of the grid
    // Rebind the grid
}

Unfortunately, this was the cause of the issue, and restructuring that code resolved the problem.
Dec 31
I’ve recently been experimenting with open source micro ORM products.  Two of the products a colleague of mine recommended were PetaPoco and Dapper.  In researching the two, I really liked some of the features of PetaPoco, so I decided to go with it.  PetaPoco’s setup is simple: download the package from GitHub or NuGet and add it to your project.  PetaPoco comes in the form of three T4 templates (one master and two related templates) that generate the PetaPoco components (which you have full access to) and the data access components that map to your database.  Since these are T4 templates, you have control over the customization.

Because I wanted to take advantage of some of PetaPoco’s performance options, some of the convenience features are gone, such as automatically generating the select columns, named parameters for queries and stored procedures, and so on.  This means I would normally write a select like the following:
new Sql()
    .Select("*")
    .From("Users")
    .Where("IsActive = 1")

However, instead of using * as a convenience, I wanted to list all of the column names, since SELECT * can perform slightly worse than naming the columns explicitly; on the other hand, listing each column is harder to keep in sync as table definitions change.  Therefore, the alternative I chose was to add some additional methods to the template to handle this for me.

Before we get to that, let’s look at what PetaPoco requires.  In my project, there are three templates; the first is the master template named Database.tt.  It’s not critically important to understand what goes on in here except for the first section, which has common settings:
// Settings
ConnectionStringName = "DB";
Namespace = "My.DataAccess";
RepoName = "MyContext";
GenerateOperations = true;
GeneratePocos = true;
GenerateCommon = true;
ClassPrefix = "";
ClassSuffix = "";
TrackModifiedColumns = true;

ConnectionStringName is important and must match a connection string defined within the config file of the project.  Namespace determines the namespace of the generated components, and RepoName is the name of the custom database class that is generated.  PetaPoco uses an approach similar to LINQ to SQL or Entity Framework, where the core DataContext/ObjectContext-style class is inherited from in the generated code, creating a new, customized context class.

The additional options control whether to generate certain pieces of code (the POCO components, common method operations, etc.) and how to modify the POCOs being generated.  The shell of the custom database class appears below:
namespace My.DataAccess
{
    public partial class MyContext : Database
    {
        public MyContext()
            : base("DB")
        {
            CommonConstruct();
        }
    }
}

The next template we’ll look at is PetaPoco.Generator.include, which contains the definition of the POCO objects.  It’s this template we can customize to add additional features and make the process smooth.  At the top is the T4 markup that generates the custom database class shown above.  Further down is the template for each POCO, which looks like this:
<# if (GeneratePocos) { #>
<# foreach(Table tbl in from t in tables where !t.Ignore select t) { #>
[TableName("<#=tbl.Name#>")]
<# if (tbl.PK!=null && tbl.PK.IsAutoIncrement) { #>
<# if (tbl.SequenceName==null) { #>
[PrimaryKey("<#=tbl.PK.Name#>")]
<# } else { #>
[PrimaryKey("<#=tbl.PK.Name#>", sequenceName="<#=tbl.SequenceName#>")]
<# } #>
<# } #>
<# if (tbl.PK!=null && !tbl.PK.IsAutoIncrement) { #>
[PrimaryKey("<#=tbl.PK.Name#>", autoIncrement=false)]
<# } #>
[ExplicitColumns]
public partial class <#=tbl.ClassName#> <# if (GenerateOperations) { #>: <#=RepoName#>.Record<<#=tbl.ClassName#>> <# } #>
{

PetaPoco can use attributes to identify the database table name and primary key column name, if that option is enabled.  The partial class definition is where the meat of the generation happens, and this is where I added three additional items.
public static string GetPrimaryKeyName()
{
    return "<#=tbl.PK.Name#>";
}

public static string GetTableName()
{
    return "<#= tbl.Name #>";
}

public static string[] GetColumns()
{
    return new string[]
    {
<#
    for(int i = 0, len = tbl.Columns.Count; i < len; i++)
    {
        Column col = tbl.Columns[i];
        if (!col.Ignore)
        {
#>
        "<#= col.PropertyName #>"<#= (i != tbl.Columns.Count - 1 ? "," : "") #>
<#
        }
    }
#>
    };
}

The first method creates a static reference to the name of the primary key, the second a static reference to the name of the table, and the last retrieves the names of all of the columns as an array.  The benefit of this approach is that it isn’t reflective: the values are produced by code generation and therefore compiled rather than evaluated at runtime, which keeps it a fast operation.  I can then update my SQL statement like the following:
new Sql()
    .Select(Users.GetColumns())
    .From(Users.GetTableName())
    .Where("IsActive = 1")

And, if I want to use the Save override that takes the table name and PK name, I can use this:
var db = new MyContext();
db.Save(Users.GetTableName(), Users.GetPrimaryKeyName(), poco);


To go even further, for selecting, we can generate the shell select statement as such:

public static Sql GetSelectSql()
{
    return new Sql()
        .Select(GetColumns())
        .From(GetTableName());
}


This makes it even easier to craft your select statements. You would only need these helpers if you disabled certain features of the generated database class (like auto-generation of select statements) to improve performance.  I hope this helps illustrate how we can use code generation to improve our applications.