Web

PathInfo And ASP.NET Themes: Why Can’t They Work Together?

If you've ever worked with ASP.NET Themes and Skins, you know that stylesheet links are added to the head section of the HTML document.

The rendered URL for these stylesheets is always relative to the location of the page being requested.

So, for a request to:

http://MySite/Section/Default.aspx

you'll get:

<link href="../App_Themes/Default/Styles.css" type="text/css" rel="stylesheet" />

which will make the web browser request:

http://MySite/App_Themes/Default/Styles.css

and it all works fine.

Well, it works fine until you need to navigate to:

http://MySite/Section/Default.aspx/PathInfo

You'll get the same stylesheet reference, and the browser will request:

http://MySite/Section/Default.aspx/App_Themes/Default/Styles.css

This happens because the web browser has no knowledge of what PathInfo is. It just resolves the relative URL, accounting only for the number of forward slashes (/) in the requested path.

I've filed a bug on Microsoft Connect about this.
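
Until this gets fixed, a possible workaround (just a sketch I haven't verified, assuming the theme stylesheets are injected into the header as HtmlLink controls) is to rewrite their URLs into application-absolute ones before rendering:

using System;
using System.Web.UI;
using System.Web.UI.HtmlControls;

// Hypothetical base page that makes theme stylesheet links immune to PathInfo.
public class PathInfoSafePage : Page
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        if (Header == null)
        {
            return;
        }

        foreach (Control control in Header.Controls)
        {
            HtmlLink link = control as HtmlLink;

            if ((link != null) && !String.IsNullOrEmpty(link.Href))
            {
                // ResolveUrl turns "~/App_Themes/Default/Styles.css" into
                // "/MySite/App_Themes/Default/Styles.css", which the browser
                // resolves correctly no matter how many slashes PathInfo adds.
                link.Href = ResolveUrl(link.Href);
            }
        }
    }
}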

Make The HttpValueCollection Class Public And Move It To System.DLL

I find the System.Web.HttpValueCollection class very useful in a wide number of situations that involve composing HTTP requests or any other need to represent a name/value collection as a string (in an XML attribute, for example).

As of now (.NET Framework 3.5 SP1 Beta), the only way to create an instance of the System.Web.HttpValueCollection class is using the System.Web.HttpUtility.ParseQueryString method.
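
In practice, the idiom is to parse an empty string to get an empty instance to build on (the values added below are just for illustration):

using System;
using System.Collections.Specialized;
using System.Web;

// ParseQueryString always returns an HttpValueCollection instance.
NameValueCollection values = HttpUtility.ParseQueryString(String.Empty);

values.Add("q", "path info");
values.Add("lang", "C#");

// ToString outputs the URL encoded query string: q=path+info&lang=C%23
string queryString = values.ToString();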

I’d like to see it public and available in a more generic assembly (like System.DLL), so that it's available to every type of .NET application (Windows client applications, Windows Service applications, Silverlight applications, Windows Mobile applications, etc.).

If you agree with me, vote on my suggestion on Microsoft Connect.

BEWARE: System.Web.HttpValueCollection Parsing Is Not Reversible

If you run this code:

System.Collections.Specialized.NameValueCollection queryString = System.Web.HttpUtility.ParseQueryString("noKey&=emptyKey&A=Akey");

queryString will actually have the runtime type System.Web.HttpValueCollection.

What's great about this class is that its ToString method outputs the collection's content in a nice URL encoded format.

As with its base class (NameValueCollection), there’s a difference between a null key and an empty string key: the parsing treats query string parameters with no key specification (noKey) as having a null key and parameters with an empty key (=emptyKey) as having an empty string key.

So, when you call ToString on the instance returned by the System.Web.HttpUtility.ParseQueryString method, you would expect to get back the parsed string (or, at least, one that would parse into an equivalent collection), right? But what you’ll get instead is this: noKey&emptyKey&A=Akey.
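
You can verify that the output doesn't parse back into an equivalent collection:

using System;
using System.Collections.Specialized;
using System.Web;

NameValueCollection queryString = HttpUtility.ParseQueryString("noKey&=emptyKey&A=Akey");

// Round trip the collection through ToString and ParseQueryString.
NameValueCollection reparsed = HttpUtility.ParseQueryString(queryString.ToString());

Console.WriteLine(queryString[null]);         // noKey
Console.WriteLine(queryString[String.Empty]); // emptyKey
Console.WriteLine(reparsed[null]);            // noKey,emptyKey - the empty string key was lost
Console.WriteLine(reparsed[String.Empty]);    // (nothing - there's no empty string key anymore)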

I’ve filed a bug on Microsoft Connect. If you think this is important and should be corrected, please vote.

More On ASP.NET Validators And Validation Summary Rendering Of Properties

In previous posts [^][^] I mentioned the size of ASP.NET validators and validation summary rendering and the fact that expando attributes are being used to add properties. Mohamed also mentions this issue.


Besides the fact that custom attributes aren't XHTML conformant, Firefox differs from Internet Explorer in the way it handles these attributes.


On Internet Explorer, these attributes are converted into string properties of the HTML element. On Firefox, on the other hand, they are only accessible through the attributes collection.
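

For example, given a validator rendered with an expando attribute in its markup (simplified, with a hypothetical id), the difference looks like this:

<span id="validator1" controltovalidate="TextBox1">*</span>

var validator = document.getElementById("validator1");
var byProperty = validator.controltovalidate;                  // "TextBox1" on IE; undefined on Firefox
var byAttribute = validator.getAttribute("controltovalidate"); // "TextBox1" on both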


I wonder why I don’t like client-side JavaScript development.

Testing With Multiple Versions Of Internet Explorer

In a previous post, I mentioned IETester.

Jorge Moura mentioned TredoSoft’s MultipleIEs and a list of web browsers.

DebugBar, Companion.JS And IETester


Some days ago, a colleague of mine pointed me to IETester, a tool that allows testing the different rendering and JavaScript engines of Internet Explorer (5.5, 6, 7, and 8 Beta 1) side by side with the installed version.


I haven't tested IETester yet, but I found two other tools on the site that caught my attention: DebugBar and Companion.JS.


DebugBar is like other tools I use [^], with a few differences: it's an explorer bar (it docks on the left side of IE and can't be undocked), it has a JavaScript console, and it tracks only the HTTP/HTTPS traffic that belongs to the visible web browser tab (IE 7). DebugBar can also spy on other instances of IE, like Document Explorer or FeedDemon.


Companion.JS is a Firebug-like tool and was the one I liked the most, because it gave me something I didn't have: something that kills those annoying scripting error dialogs.


Both DebugBar and Companion.JS claim to be JavaScript debuggers, but I couldn't find any way to set breakpoints or run scripts step by step. That's probably because I have Visual Studio installed on this machine.

Internet Explorer vs. Firefox

Until recently, I had never used Firefox (FF) because Internet Explorer (IE) was good enough for me.

I don't do much web page development, and because I own licenses for Visual Studio (VS), HTTPWatch, and IEWatch, I never needed anything else. (I tried the Internet Explorer Developer Toolbar, but it keeps blowing up and killing IE; and I've seen Nikhil Kothari's Web Development Helper installed, and it doesn't work well when non-US English characters are displayed.)

Over the years, I've seen all the campaigning against IE, promoting FF as a better, more standards-compliant, more secure browser, and so on.

A few days back I had to do some work with ASP.NET validation summary and validators and needed to check if it worked on FF.

Talk about disappointment:

  • Firebug is by far no better than the tools I've been using.
  • FF needs its own proxy configuration - for me, any application running on Windows that needs its own proxy settings is just a badly developed application.
  • (I'm sure I'd find much more if I used it.)

IE isn't a good developer tool yet (not even IE8, at this time [^]), and it should have been one for a long time. Or, at least, VS should have better support for HTML and CSS debugging.

But, on the other hand, Windows Internet Explorer is just another application built on top of the Web Browser Control [^] (which is part of the IE installation, but can be used by itself). You can build any Windows application that uses a Web Browser Control (I've built more than one). It looks like the same is not as simple with FF [^].
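
For example, a minimal Windows Forms application hosting the Web Browser Control (with a hypothetical form title and start URL) is just a few lines:

using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Host the Web Browser Control inside a plain form.
        Form form = new Form();
        form.Text = "My Browser";

        WebBrowser browser = new WebBrowser();
        browser.Dock = DockStyle.Fill;
        form.Controls.Add(browser);

        browser.Navigate("http://MySite/");

        Application.Run(form);
    }
}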

I don't intend to start a web browser war. I just wanted to state my disappointment. I guess FF fans set my expectations too high.

Rendering ASP.NET Validators And Validation Summary Properties As HTML Attributes

Yesterday I blogged about the cause of ASP.NET validators and validation summary slowness.


At that point, I wasn't aware of the existence of the XHTML conformance configuration (thanks, Nuno).


With the XHTML conformance configuration set to Legacy, the rendering of controls works like it did in ASP.NET 1.1.
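

For reference, the setting goes in the application's web.config:

<configuration>
  <system.web>
    <!-- Render control markup as in ASP.NET 1.1 -->
    <xhtmlConformance mode="Legacy" />
  </system.web>
</configuration>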

The Cause Of ASP.NET Validators And Validation Summary Slowness

When building ASP.NET pages, if you use too many validators and validation summaries, your pages can become very slow. Have you ever wondered why?


Let's build a simple web page with a few validators. Something like this:


[Image: Web page with validation]


The page is composed of:



ASP.NET renders the ValidationSummary as a DIV and each validator as a SPAN, and uses expando attributes to add properties to those elements.


According to the documentation, expando attributes are set dynamically from JavaScript to preserve XHTML compatibility for the rendered control's markup.
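

Simplified (and with hypothetical control ids), the rendered output looks something like this:

<span id="RequiredFieldValidator1" style="color:Red;visibility:hidden;">*</span>
<script type="text/javascript">
var RequiredFieldValidator1 = document.getElementById("RequiredFieldValidator1");
RequiredFieldValidator1.controltovalidate = "TextBox1";
RequiredFieldValidator1.errormessage = "Required!";
RequiredFieldValidator1.evaluationfunction = "RequiredFieldValidatorEvaluateIsValid";
</script>

instead of something like this:

<span id="RequiredFieldValidator1" controltovalidate="TextBox1" errormessage="Required!" evaluationfunction="RequiredFieldValidatorEvaluateIsValid" style="color:Red;visibility:hidden;">*</span>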


The problem is that all that JavaScript makes the HTML document larger and slower to execute than if the properties were rendered in HTML as attributes of the elements.


For such a small page, the difference in size approaches 2 KB. If you add a few dozen validators to the page, the slowness becomes noticeable.


I'm all in favor of strict standards and standards compliance, but in this case, I wish XHTML would allow arbitrary attributes.

Is It Possible To Compress An HTTP Request?

Recently, I was asked (in relation to my articles [^] [^] about HTTP compression) whether it was possible to compress the contents of a web service call.

The way HTTP compression works (as far as I know) is by the client announcing to the server (using the Accept-Encoding HTTP request header) what compression methods it is capable of handling.

If the server is capable of using one of the accepted compression methods, it compresses the response and specifies (using the Content-Encoding HTTP response header) the compression method used.

The client usually doesn't know whether the server accepts any kind of encoding, so it shouldn't impose any compression on the server.

One way to allow request compression would be for the client to send a Content-Encoding HTTP header specifying the compression method used on the request, and for the server to handle it in the BeginRequest event by setting an HttpRequest.Filter capable of decompressing the request. This way, it would be transparent to the request handling.
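
A minimal sketch of such a module (untested, like the idea itself, and assuming the client uses gzip) could look like this:

using System;
using System.IO.Compression;
using System.Web;

public class RequestDecompressionModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += OnBeginRequest;
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        HttpRequest request = ((HttpApplication)sender).Request;

        // If the client declared a gzip-compressed body, wrap the request
        // filter in a decompressing stream so downstream handlers read
        // plain content transparently.
        if (string.Equals(request.Headers["Content-Encoding"], "gzip", StringComparison.OrdinalIgnoreCase))
        {
            request.Filter = new GZipStream(request.Filter, CompressionMode.Decompress);
        }
    }

    public void Dispose()
    {
    }
}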

NOTE: I didn't test this.