Centering Content Properly

As an ASP.NET developer, not a designer, it’s pretty easy to write crappy HTML. As I have come to learn, content designers will hate you for that. One thing that has always fooled me is how to properly center content. Centered content is nice sometimes: it puts information right in front of the user’s face, exactly where they expect it. HTML has come a long way since 2000, and it’s worth looking at some of the right and wrong ways to do it. Your designers will love you for it.

The oldest and simplest way to center content is the <center> tag. Designers hate this. It works differently between IE 6 and 7, and IE 7 will display it differently depending on the DOCTYPE. In quirks mode it has an interesting effect: if you center a DIV in quirks mode in IE, the DIV will be centered, but the DIV’s contents will be left aligned. That’s actually the behavior we want, but unfortunately it's not a good option, since it only works that way in IE and you’re forced into quirks mode. In standards mode, the center tag behaves correctly: everything is centered, even the content of the DIV, which is not what we are looking for.

To that extent though, you could use a center tag, and just left align the contents of the DIV. Something like this:

<html>
    <body>
        <p>Some other content here, maybe a header.</p>
        <center>
            <div style="width: 500px; border: solid 1px black; text-align: left;">I want this DIV centered, but the text still left aligned.</div>
        </center>
        <p>Some other content here, maybe a footer.</p>
    </body>
</html>

Of course I would put my CSS in an external file; otherwise you’ll really catch hell from a designer. Needless to say, this gives the right effect: the DIV sits in the middle of the page with its text left aligned.

But this really isn’t the best choice. Depending on what your designers have to work with (say, on a product), they may only have access to the style sheets. A center tag is not very flexible, and it cannot be moved into an external file the way a CSS stylesheet can.

So what other options are there? Tables would work, but that’s even worse than the center tag, and it has the same problem: it’s not CSS. So how do you do this using CSS? The trick is auto margins. It’s a fantastic part of CSS layout that, I find, surprisingly few developers know about. The HTML looks like this:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
    <body>
        <p>Some other content here, maybe a header.</p>
        <div style="width: 500px; margin: 0 auto; border: solid 1px black;">
            test
        </div>
        <p>Some other content here, maybe a footer.</p>
    </body>
</html>

Note that IE, even the famed standards-compliant IE 8, will NOT render this correctly without the correct DOCTYPE: IE will leave the DIV left aligned, while Gecko-based browsers (Firefox) will still render it correctly.

So why is this the best solution (IMHO)? It’s completely controlled by CSS, and content designers love that.

What’s essentially happening here is that the browser computes equal margins for the left and right sides automatically, which places the element in the center.
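
Since the whole point is keeping presentation out of the markup, here is a minimal sketch of the same layout with the style moved to an external stylesheet (the class and file names are mine; pick whatever your designers prefer):

/* site.css */
.centered
{
    width: 500px;
    margin: 0 auto;
    border: solid 1px black;
}

<!-- the page, with the stylesheet referenced in the head -->
<link rel="stylesheet" type="text/css" href="site.css" />
<div class="centered">I want this DIV centered, but the text still left aligned.</div>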

Enum Friendly

Have you ever seen a huge switch on an enum just to return a string? Enumerations can be tricky to deal with, mostly when it comes to displaying them back to a user. So I whipped up a little code, a small extension method, to make displaying enumerations easier. Consider this enumeration:

public enum StatusEnumeration
{
    Normal,
    NotRunning
}
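
For illustration, the switch-based approach I want to avoid looks something like this (a hypothetical helper, not part of the final code):

public static string RenderStatus(StatusEnumeration status)
{
    switch (status)
    {
        // One case per value that needs friendly text; this grows with the enum.
        case StatusEnumeration.NotRunning:
            return "Not Running";
        default:
            return status.ToString();
    }
}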

If I want to display this back to a user in English, I can switch on the value and return “Not Running” when the value is NotRunning, as sketched above. That gets tedious fast, so I created an attribute to decorate an enumeration. I call it the RenderAsAttribute:

[AttributeUsage(AttributeTargets.Field, AllowMultiple = false, Inherited = false)]
public sealed class RenderAsAttribute : Attribute
{
    private readonly string _renderAs;

    public RenderAsAttribute(string renderAs)
    {
        _renderAs = renderAs;
    }

    public string RenderAs
    {
        get { return _renderAs; }
    }
}

All you do is put this on the enumeration’s values, like so:

public enum StatusEnumeration
{
    Normal,
    [RenderAs("Not Running")]
    NotRunning
}

Notice I put the “RenderAs” on NotRunning. Now for the cool part, a static extension method to make this attribute useful. It would be used something like this:

string notRunningAsString = StatusEnumeration.NotRunning.Render();

Notice how I just call “Render()” on the value of the enumeration.

public static class EnumRenderer
{
    public static string Render(this Enum value)
    {
        // Find the field on the enum type that matches this value.
        FieldInfo[] enumerationValues = value.GetType().GetFields(BindingFlags.Static | BindingFlags.Public);
        foreach (var fieldInfo in enumerationValues)
        {
            if (value.CompareTo(fieldInfo.GetValue(null)) == 0)
            {
                // If the field is decorated with [RenderAs], use its text.
                var attributes = fieldInfo.GetCustomAttributes(typeof(RenderAsAttribute), false).Cast<RenderAsAttribute>();
                if (attributes.Count() == 1)
                    return attributes.Single().RenderAs;
            }
        }
        // No attribute found; fall back to the enum value's name.
        return value.ToString();
    }
}

This uses reflection to look up the value on the enumeration. It’s a bit slow as far as performance is concerned, but you can easily add caching to it by Type and value. Or, instead of using hard-coded strings, you can use the string as a resource key and look up the display text in a satellite assembly.
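
Caching is straightforward. Here is a minimal sketch of what that could look like (the class name and locking strategy are mine, not anything standard). A boxed enum value compares by both its type and its value, so it works as a single dictionary key covering Type and value:

public static class CachedEnumRenderer
{
    // Boxed enum values hash and compare by type + value, so this caches per Type and value.
    private static readonly Dictionary<Enum, string> _cache = new Dictionary<Enum, string>();
    private static readonly object _lock = new object();

    public static string RenderCached(this Enum value)
    {
        lock (_lock)
        {
            string rendered;
            if (!_cache.TryGetValue(value, out rendered))
            {
                rendered = value.Render(); // the reflection-based method above
                _cache[value] = rendered;
            }
            return rendered;
        }
    }
}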

MVP 2009 and other stuff

On April 1st I was re-awarded as an MVP for 2009, and I’m happy to be able to keep doing what I do for the community.

So where has my blog been lately? Well, I’ve been focusing on other things at the moment, but I am still blogging. I recently wrote a post for my company’s blog about refactoring and getting your feet wet with it. We’ll see where the series goes. You can read my post about refactoring here:

http://www.thycotic.com/refactoring-code-a-programmers-challenge

TDD Training Course

Ever want to learn more about Test Driven Development? This coming Saturday, January 10th, Thycotic Software will be hosting a free one-day TDD training course. This course is ideal for developers new to TDD or people who are just interested in getting into it. Areas covered include an introduction to TDD, the philosophy behind it, and a brief introduction to mock testing. A full training session will be available in the near future, and with any luck, if this one turns out well, we’ll keep doing it!

It’s a bit of a plug, but the offer is so good, who wouldn’t want to hear about it?

Managing IIS Pipeline Mode for Backward Compatibility

Ever since the introduction of IIS 7, there has been a cool new feature for developers to leverage called the Integrated Pipeline. In short, it couples ASP.NET and IIS much more closely, and it allows writing IIS modules in managed code; how neat is that? It also slightly changes the behavior of ASP.NET, introducing new events on the HttpApplication (Global.asax) and changes to the event cycle. A classic example: in Integrated mode you cannot access the current request through HttpContext.Current.Request during the Application_Start event, which makes sense.

Now for obvious reasons this can break existing applications, and it cannot be used for applications that will be installed in mixed environments, such as Server 2003 and Server 2008. Fortunately, IIS 7 makes it easy to switch back to the old behavior, the Classic Pipeline, by configuring the IIS AppPool. Most commonly, if you make a web-based product, your application will usually require the Classic Pipeline to stay backwards compatible.

Even with a detailed instruction manual, this can be an easy step to miss, and it results in some icky error messages. Wouldn’t it be nice if you could show your customers a nice error message telling them that their pipeline is configured improperly?

Well, fortunately there is a way to do that. Ironically, we will be using part of the Integrated Pipeline to get it accomplished, with the help of an HttpModule.

What we will do is write an HttpModule that stops the current request and writes out a friendly message linking to a Knowledge Base article, but only if our application is running in the Integrated Pipeline. The module is pretty simple; we write it like any other HttpModule. Here is the code for mine:

using System;
using System.Web;

namespace MyWebApplication
{
    public sealed class IisPipelineCheckModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            // Hook the very first event in the request cycle.
            context.BeginRequest += context_BeginRequest;
        }

        private void context_BeginRequest(object sender, EventArgs e)
        {
            // Throw away anything already buffered, send our friendly
            // static page instead, and stop processing the request.
            HttpContext.Current.Response.ClearContent();
            HttpContext.Current.Response.WriteFile(VirtualPathUtility.ToAbsolute("~/pipeline.htm"));
            HttpContext.Current.Response.End();
        }

        public void Dispose()
        {
        }
    }
}

As you can see, there is nothing particularly difficult here. It hooks into the BeginRequest event of the application, writes a file called pipeline.htm to the response stream, and ends the response. Pretty simple, right?
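
The pipeline.htm file itself is plain static HTML; a hypothetical minimal version (the KB link here is a placeholder) might be:

<html>
    <body>
        <h1>Configuration Problem</h1>
        <p>This application must run in the Classic Pipeline. Please see
        <a href="http://support.example.com/kb/pipeline">this Knowledge Base article</a>
        for instructions on configuring the IIS AppPool.</p>
    </body>
</html>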

Well, the final trick is getting this to run only in the Integrated Pipeline. Oddly, there is no easy way to check this from the .NET Framework (that I was able to find). However, we can leverage the web.config to accomplish it.

For IIS 7, there is a new section in the web.config file called <system.webServer>; it is one of the sections IIS 7 pays attention to. What we will do is register our module in IIS 7’s Integrated Pipeline, but not in the old <httpModules> section that has been around for a long time. Let’s look at our new section and examine it.

<system.webServer>
    <validation validateIntegratedModeConfiguration="false" />
    <modules>
        <add preCondition="integratedMode" name="IisPipelineCheckModule" type="MyWebApplication.IisPipelineCheckModule,MyWebApplication" />
    </modules>
</system.webServer>

There are two interesting points here. The first is actually adding our module; notice the preCondition attribute. By specifying the integratedMode flag, it tells IIS 7 that our module should run only in the Integrated Pipeline. For the type, we give the full namespace and class of our HttpModule, plus the assembly it lives in.

The second interesting point is that if you have standard HttpModules registered in the configuration/system.web/httpModules section, IIS 7 will complain that there are modules in there that are not in the system.webServer configuration. We can, if we want, disable that check through the validation element.

And ta-da, we now have a module that writes out a file, but only under the Integrated Pipeline. You can customize the pipeline.htm file as much as you’d like. As far as I can tell, this is the only easy way to determine whether your application is running in the Integrated or Classic Pipeline. The alternative is a WMI query against the IIS metabase, which is riskier and more complex.

A Step Too Far

Occasionally, I have a bit of a compulsive behavior: when presented with a challenge, I usually won’t give up until I have a working answer... and sometimes that answer gets a little crazy. Here’s one of my more recent journeys down that path.

I was asked, “Hey Kevin, I have a method that accepts a type parameter, ‘T’. Is there a type constraint I can add to T so that I may sum them?”

As an example, consider this code: 

public string FormatNumber<T>(T t1, T t2) where T : IFormattable
{
    T sum = t1 + t2; // This line will not compile; nothing guarantees T defines operator +.
    return String.Format("{0:N2}", sum);
}

This sample is not very useful by itself; it’s just an example.

The problem is that t1 and t2 cannot be added, because the compiler cannot guarantee that values of T can be summed at all. Unfortunately, generics do not yet offer a way to constrain on operators.

The eventual solution was that the code needed some rethinking. The need for generics was legitimate, but that’s not worth going over here. Either way, I started wondering how this could be done without resorting to something like this:

public T Add<T>(T t1, T t2)
{
    if (t1 is int)
        return (T)(object)(((int)(object)t1) + ((int)(object)t2));
    if (t1 is long)
        return (T)(object)(((long)(object)t1) + ((long)(object)t2));
    if (t1 is float)
        return (T)(object)(((float)(object)t1) + ((float)(object)t2));
    throw new NotImplementedException("Can't do addition.");
}

That just seems unattractive to me. So I started thinking... and thinking... and it spiraled down from there. The solution that I came up with is very unattractive, slow, and nuts.

My solution? Dynamically generate an assembly using Reflection.Emit. It works, but it’s not very elegant. Here it is:

// The dynamic assembly lives in a static field so it is created only once;
// the field and assembly names here are arbitrary.
private static readonly AssemblyBuilder assemblyBuilder =
    AppDomain.CurrentDomain.DefineDynamicAssembly(new AssemblyName("DynamicAddition"), AssemblyBuilderAccess.Run);

public static T Add<T>(T t1, T t2)
{
    if (!typeof(T).IsPrimitive)
    {
        throw new Exception("Type is not primitive.");
    }
    ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule("MainModule");
    TypeBuilder typeProxy = moduleBuilder.DefineType("AdditionType", TypeAttributes.Class | TypeAttributes.Public | TypeAttributes.Serializable);
    MethodBuilder methodBuilder = typeProxy.DefineMethod("SumGenerics", MethodAttributes.Static, typeof(T), new[] { typeof(T), typeof(T) });
    // Emit a body that loads both arguments, adds them, and returns the result.
    ILGenerator generator = methodBuilder.GetILGenerator();
    generator.Emit(OpCodes.Ldarg_S, 0);
    generator.Emit(OpCodes.Ldarg_S, 1);
    generator.Emit(OpCodes.Add);
    generator.Emit(OpCodes.Ret);
    Type adder = typeProxy.CreateType();
    MethodInfo mi = adder.GetMethods(BindingFlags.Static | BindingFlags.NonPublic)[0];
    return (T)mi.Invoke(null, BindingFlags.Default, null, new object[] { t1, t2 }, null);
}

Not to mention that you have to tell your AppDomain how to resolve the type, and that involves a little magic with the AppDomain’s AssemblyResolve event:

private static Assembly OnAssemblyResolve(object sender, ResolveEventArgs args)
{
    // Hand the runtime our dynamic assembly when it asks for it by name.
    return args.Name == assemblyBuilder.FullName ? assemblyBuilder : null;
}
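
And the handler has to be attached before the first call; a static constructor on whatever class holds this code is one reasonable place (the class name here is hypothetical):

static CrazyAdder()
{
    AppDomain.CurrentDomain.AssemblyResolve += OnAssemblyResolve;
}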

Sometimes it just feels good to write crazy code and get it out of your system, before you really do start writing code...

XP SP3 and Internet Explorer

Perhaps it’s just me, or maybe I missed something, but I was a little excited about Service Pack 3. In particular, the Network Level Authentication feature made it into Remote Desktop, which means I can enforce the NLA requirement for Windows Server 2008 and Terminal Service Gateways. The added support for WPA2 is also a huge security advantage for people with wireless networks: I was happy to be able to switch my wireless router to WPA2 and still support my machine that is running XP, and it’s something network administrators look for as well.

Not to mention, it was pretty painless to install.

However, I also had Internet Explorer 8 Beta 1 installed. Beta 1 was interesting, but due to some stability issues I thought it was time to remove it and hope for the best in Beta 2. When I looked at Add/Remove Programs, though, I saw that I could not remove it: the Uninstall button was missing. I did a little digging, but I didn’t find anything online that caught my attention.

At Tech Ed 2008, though, I talked to Jane Maliouta on the IE 8 team, and she explained that Service Pack 3 makes IE 8 uninstallable. The only known workaround is to remove Service Pack 3, then remove Beta 1, then reinstall Service Pack 3.

Please read Jane’s blog entry on how else XP SP3 affects Internet Explorer. Service Pack 3 prevents you from uninstalling Internet Explorer 7 as well.

http://blogs.msdn.com/ie/archive/2008/05/05/ie-and-xpsp3.aspx

Tech·Ed 2008

Tech Ed 2008 is the first Tech Ed to be split into two separate conventions, one for IT professionals and one for developers. For 2008, I will be attending the IT Professional conference from June 10th to 13th in Orlando, Florida. I’ll be on the convention floor with my company, Thycotic Software, showing off our flagship product, Secret Server. If you’re going to be there, be sure to come check out our booth!

DocBook

Let’s take a break from the text encoding series real quick so I can talk about a new tool that I recently got into.

One of the things that every product needs, regardless of how simple it is to use, is good documentation. It’s not fun, it takes time, and it isn’t technically intriguing. Regardless, it has to be done. The part that my team members and I have struggled with is finding a tool that makes it easy. We looked at a few commercial applications such as RoboHelp, but they always left me with the impression that we were rabbit hunting with a Barrett M107 .50 rifle. Our requirements were pretty simple:

  • Easy to use
  • Text based – this makes diffs and merging easy
  • Reasonably priced
  • Able to produce different types of documents (HTML, PDF, etc.)

We finally settled on what is, in my opinion, the best solution (not to mention it’s open source and free): DocBook. It’s based on XML and has a formal standard. XML is extremely flexible, and the output is generated by XSL transformations, so we can easily customize it to meet our requirements. We started using the e-novative DocBook Environment, which gives you a simple command-line environment for compiling your DocBook books. It uses a GPL license too, so you can customize it to your needs.

A simple book looks something like this:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE book
  PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" "file:/c:/docbook/dtd/docbookx.dtd"
  [
    <!ENTITY % global.entities SYSTEM "file:/c:/docbook/include/global.xml">
    %global.entities;

    <!ENTITY % entities SYSTEM "entities.xml">
    %entities;
]
>
<book lang="en">
    <bookinfo>
        <title>My First Book</title>
        <pubdate>May, 2008</pubdate>
        <copyright>
            <year>2008</year>
            <holder>Kevin Jones</holder>
        </copyright>
    </bookinfo>
    <chapter>
        <title>My First Chapter</title>
        <sect1>
            <title>How to crash a .NET Application</title>
            <para>Call System.AppDomain.Unload(System.AppDomain.CurrentDomain)</para>
        </sect1>
    </chapter>
</book>

Pretty simple, right? Each book can be broken down into chapters, which are broken down into sections, then paragraphs. DocBook takes care of some of the dirty work for you, such as maintaining a table of contents, and it offers a lot of other standard features as well, like embedding graphics and referencing other places in the document.

Since DocBook is capable of understanding external entities, I can move chapters, sections, or any other part of the document into a separate file and create an <!ENTITY … > for it.
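
For example, a chapter living in its own file (the file name here is mine) can be declared in the internal subset and pulled in where it belongs:

<!DOCTYPE book
  PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" "file:/c:/docbook/dtd/docbookx.dtd"
  [
    <!ENTITY chapter1 SYSTEM "chapter1.xml">
]
>
<book lang="en">
    &chapter1;
</book>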

Compiling it is pretty easy. From the e-novative environment, just use docbook_html.bat or docbook_pdf.bat to create your generated output, something like this:

>C:\docbook\bat\docbook_html.bat MyFirstBook

MyFirstBook is the name of the project in the projects folder, which is all created automatically for you by the docbook_create.bat script. Using the compiler, the out-of-box HTML template looks like this:

(screenshot: the out-of-box DocBook HTML output)

There you have it: a simple documentation tool. Not very pretty at the moment, but of course it’s easy enough to theme it to your company or product by changing the XSL.

Text Encoding (Part-1)

I was recently on the ASP.NET Forums, and a member asked, “How can I figure out the encoding of text?” That got me thinking: there should be a reasonable way to do this, right? It’s a useful thing to know. First, we need a little background on how text is encoded into bytes.

Long ago, back when 64K of memory was a big deal, characters took up a single byte. A byte ranges from 0 to 255, which allows us to support a total of 256 characters. Seems like plenty, no? English has 26 letters, 52 for both cases, 62 with numbers, 92 with punctuation, and a few extra for line breaks, carriage returns, and tabs. So about 100, give or take a few. So what’s the problem?

Well, this worked great and all, but other languages use different characters. The Cyrillic alphabet alone has 33 letters. This is where encodings were introduced: in order to support multiple character sets, what each byte meant was determined by the encoding in use. This worked simply by knowing which encoding was used.

In today’s world, where the average calculator has more memory than PCs did long ago, we also use two-byte encodings. That means we can support 256 to the second power of characters, or 65,536. That is enough to support all languages in a single encoding, even though it takes up double the space. Problem solved, right? Not exactly.

While in this day and age we support double-byte encodings, there are still other factors involved, such as endianness, the order of the bytes within a multi-byte value (the letter ‘A’, U+0041, is 41 00 in little-endian UTF-16 but 00 41 in big-endian). Even then, there is still a lot of legacy single-byte data to support.

Say I give you a big binary chunk of data and tell you to convert it to text. How do you know which encoding was used? How do you even know which language it is in? I could be giving you a chunk of data using IBM-Latin. So how do we figure this out? Some smarts and a process of elimination. Let’s start with things we know.

The Unicode encodings can start with what’s called a Byte Order Mark, or BOM for short: a small amount of binary data prepended to the rest of the data that identifies the encoding. In the .NET world, this is called the preamble. Since the BOM is defined by the Unicode standard, it is always the same for a given encoding, regardless of whether you are using .NET, Python, or Ruby. We can look at our data and see if the BOM can tell us.

To achieve this in .NET, we will use classes in the System.Text namespace, specifically the Encoding class. An instance of the Encoding class has a method called GetPreamble(), which gives us the BOM for that encoding. A BOM can be 2 to 4 bytes, depending on the number of bytes the encoding uses. Remember when I said two bytes would be plenty? Well, I fibbed: there is an encoding called UTF-32 that uses 4 bytes (room for a whopping 4.2 billion values).
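
A quick sketch makes the preambles concrete; this just dumps the BOM bytes of a few well-known encodings:

// Print the BOM (preamble) for a few well-known encodings.
foreach (Encoding e in new Encoding[] { Encoding.UTF8, Encoding.Unicode, Encoding.UTF32 })
{
    Console.WriteLine("{0}: {1}", e.WebName, BitConverter.ToString(e.GetPreamble()));
}
// Output: utf-8: EF-BB-BF, utf-16: FF-FE, utf-32: FF-FE-00-00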

We can then check our data to see if it starts with the BOM.

private static bool DataStartsWithBom(byte[] data, byte[] bom)
{
    bool success = data.Length >= bom.Length && bom.Length > 0;
    for (int j = 0; success && j < bom.Length; j++)
    {
        success = data[j] == bom[j];
    }
    return success;
}

So let’s look at this method. It takes our data and a BOM, and determines whether the data starts with that BOM. There are a few assumptions:

  1. The data length is always greater than or equal to the BOM’s length. If it is not, there is no BOM at all, and we’ll cover that in a bit.
  2. The BOM’s length is always greater than zero.

So let’s put it to use (assume the local data is a byte[]):

foreach (EncodingInfo encodingInfo in Encoding.GetEncodings())
{
    Encoding encoding = encodingInfo.GetEncoding();
    byte[] bom = encoding.GetPreamble();
    if (DataStartsWithBom(data, bom))
        return encoding;
}

Here, we get all of the encodings that .NET knows about and check whether our data byte array starts with that encoding’s BOM. If an encoding has no BOM, the DataStartsWithBom method handles it with the bom.Length > 0 check on its third line. Once we know the encoding, we can decode the data. You just have to make sure you don’t try to decode the BOM itself:

encoding.GetString(data, bom.Length, data.Length - bom.Length);

Pretty straightforward so far, right?

Yes? OK, let’s move on. What about the case where we can’t figure it out from the BOM? Most encodings don’t have a BOM; only the UTF encodings do. ISO and OEM encodings do not.

This is where it gets tricky, and where some pretty complex algorithms can come into play. The most important piece of information you can have at this point is which language the text is in. With that, we can take a reasonable stab at the encoding.

.NET supports languages through the System.Globalization.CultureInfo class, which will be very useful from here on. Let’s take baby steps in attacking this problem; while we don’t know everything, we can use clues.

Each language has what’s called an ANSI encoding, a standard single-byte encoding for that language associated with the American National Standards Institute. This seems like a reasonable place to start.

We can get this encoding by calling cultureInfoInstance.TextInfo.ANSICodePage. That only gives us the numeric code page (an identifier), but it’s simple enough to create an instance of the Encoding class from it by calling Encoding.GetEncoding(int codePage).
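
Put together, a minimal sketch (the culture here is just an example) looks like this:

// Map a culture to its ANSI code page, then get the matching Encoding.
CultureInfo culture = CultureInfo.GetCultureInfo("ru-RU");
Encoding ansi = Encoding.GetEncoding(culture.TextInfo.ANSICodePage);
Console.WriteLine(ansi.WebName); // windows-1251 for Russian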

How do I figure out the language? Chances are you know what language your users are using, or at least most of them. One case where you wouldn’t know is screen scraping; there you can look at the character set declared by the response, via the CharacterSet property on the HttpWebResponse instance.
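
A hedged sketch of that check (the URL is a placeholder):

// Peek at the character set the remote server declares for its response.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/");
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    Console.WriteLine(response.CharacterSet); // e.g. "utf-8"
}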

In most cases this will probably work. By no means am I saying it will always work; in fact, there are a lot of bases I haven’t covered that I hope to get to in future blog posts. There are other code bits out there that do this already, and do a good job, but it’s always good to know how it actually works and to fully understand the problem you are trying to solve.

So what’ll be in part 2? How to decode text without knowing the language, and maybe in a part 3, lossy decoding.