Blog moved…
This time, to the official WordPress site. You can follow it here (and if you're following it by RSS, there's no need to change anything as long as you're using my FeedBurner feed).
Ramblings about C#, .NET and Programming
Conversions…such a boring topic, and I'm positive that you already know everything there is to know about numeric conversions in C#. That's all good, but there are some small details that might bite you if you're not careful. Ok, let's start with the basics, shall we?
Everyone knows that whenever you need to perform an arithmetic calculation that involves mixing numeric types, C# will pick the one that has the largest range and it will promote all the other types with a narrower range to that type before performing the calculation. Here’s an example:
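Here's a sketch of the kind of snippet described above (variable names are mine, not the original code):

```csharp
int a = 10;
// a is implicitly promoted to double because 4.0 is a double
double result = a / 4.0;
Console.WriteLine(result); // prints 2.5
```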
As you can see, C# will convert a into a double before performing the division because 4.0 is a double and doubles have a "wider range" than an int: a double can represent any value that an int can. This conversion is performed implicitly because it's considered a promotion (since the target type has a wider range than the original type, there's no loss in the conversion from int to double). The same does not happen when you try to perform a conversion in the opposite direction (aka a narrowing conversion):
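Something along these lines won't compile (a sketch of the scenario just described):

```csharp
double d = 2.5;
// the next line produces a compile-time error:
// "Cannot implicitly convert type 'double' to 'int'"
// int i = d;
```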
It’s still possible to get a narrowing conversion, but you need to be explicit about it and use a cast:
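For example (a sketch; notice how the fractional part is simply discarded):

```csharp
double d = 2.5;
int i = (int)d; // explicit cast: i ends up with 2
Console.WriteLine(i);
```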
Whenever you use a cast, you're saying something like "hey, I really want to perform this conversion and I can live with the eventual loss of data". And now the compiler is able to convert it because you've said it's ok. Now, what happens when you combine types where neither one is strictly more expressive than the other? For instance, what happens when you mix int, uint and float in an expression?
Unlike the double example, all of these types are 32 bits in size, so none of them can hold more than 2^32 distinct values. They do, however, have different ranges. For instance, 3000000001 fits in a uint but is simply too large to be put in an int (and it can only be approximated in a float). What about -1? Yes, you can put it in an int, but since it's a negative number, you can't really put it in a uint. And so that the "float lovers" aren't upset, it's also true that there are very large numbers that float can represent which are out of range for both int and uint.
C# will allow some implicit conversions in these scenarios where there is the potential to lose precision. Since C# cares about range (and not precision), it will allow implicit conversions whenever the target type has a wider range than the source type. In practice, this means that you can convert implicitly from int and uint to float: although float is unable to represent some values exactly, there are no int or uint values it cannot represent (at least approximately). Unfortunately, this also means that there's no implicit conversion from float to int or uint. Here's an example that illustrates the point of lost precision:
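The snippet might look something like this (a sketch that reproduces the output shown below; 3000000001 cannot be represented exactly in a float, so the round trip loses information):

```csharp
uint i = 3000000001;
float f = i;         // implicit: uint -> float (precision may be lost)
uint back = (uint)f; // the opposite direction requires an explicit cast
Console.WriteLine("{0} - {1}", i, back);
```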
Running the previous snippet ends up printing the following:
3000000001 – 3000000000
Press any key to continue . . .
As you can see, I've forced the conversion from the float to a uint and we ended up getting a different number. Before ending, it's also interesting to understand what happens when we perform a narrowing conversion to an int when we have an out-of-range number. Well, it all depends on the type of the value being cast. When we're talking about integer casts, the spec does a good job of specifying what should happen: if the types are of different sizes, the binary representation will be truncated or padded with zeros so that it ends up with the right size for the target type. Here's an example of what I mean:
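A sketch of that truncation behavior (the hex values make it easy to see which bits survive):

```csharp
long big = 0x1234567890ABCDEF;
int truncated = (int)big; // only the low 32 bits are kept
Console.WriteLine(truncated.ToString("X")); // prints 90ABCDEF
```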
Sometimes, this is useful, but it can also end up giving some surprising results. And this leads us into checked and unchecked operations. However, since this is a really big post, we’ll leave it for a future post.
[This is an opinion article about taxes in Portugal for 2013, originally written in Portuguese. If you're not living in Portugal, then it's safe to carry on reading other stuff]
Over this weekend, I had the chance to realize that there are still people who believe in Santa Claus (or, if you prefer, who still believe in the Easter Bunny). That's the only way to explain how there are still people who think that 1) they'll *only* receive a little less net income next year, and 2) public sector workers won't be hit as hard as everyone else, since they'll have the "benefit" of receiving the 13th month salary in 2013.
In this post, my goal isn't to discuss the work of those employees or whether their jobs should exist (if anyone is interested in starting that discussion, just say so), but rather to show how brutal next year's tax increase is and how it will hit every family whose income comes from salaried work.
To keep the math simple, let's assume a couple with no children, where both work in the public sector and each earns 1500 euros per month (this value lets us ignore the reduction from the extra surcharge, ranging from 3.5% to 10%, applied to all public sector workers earning more than 1500 euros per month). In practice, this means we're looking at a gross monthly income of 3000 euros. Before moving on, I should warn you that I'm no expert on the subject and that the calculations presented here are based on what I've read and seen in several articles. So, let's look at this couple's IRS calculation for 2012:
Annual gross income (12 months x 3000) | 36000
Marital quotient (/2) | 18000
Average rate: 12.348% x 7410 | ~915
Standard rate: 24.5% x (18000 - 7410) | ~2595
Caixa Geral de Aposentações (11%) | 1980
ADSE (1.5%) | 270
Total net income | 2 x (18000 - 915 - 2595 - 1980 - 270) = 24480
Before moving on, it's important to note that the calculations presented are annual (based on this table). Don't bring up the monthly withholding tables: those are always approximations which try to "guess" what the taxpayer should pay each month. By using the annual tables, we can be sure we have the "correct" final values. In this example, I also didn't take into account any tax deductions for health expenses, etc.
The calculation is relatively simple: we find the bracket that contains the gross value and take the upper limit of the immediately preceding bracket. That value (the previous row's upper limit) is multiplied by that previous row's average rate. Then we compute the difference between the annual gross value and that upper limit used in the first calculation. This new value is multiplied by the standard rate of the bracket which contains the gross income.
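The bracket calculation just described can be sketched in a few lines of code (using the 2012 numbers from the table above; the variable names are mine):

```csharp
double taxableIncome = 18000;     // gross income divided by the marital quotient
double previousUpperLimit = 7410; // upper limit of the preceding bracket
double averageRate = 0.12348;     // average rate of that preceding bracket
double standardRate = 0.245;      // standard rate of the bracket containing the income

double tax = previousUpperLimit * averageRate
           + (taxableIncome - previousUpperLimit) * standardRate;
Console.WriteLine(tax); // ~915 + ~2595, i.e. roughly 3510
```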
So, out of the 36000 gross, the public sector couple ends 2012 with 68% of that income (this, of course, after making all the deductions mandated by law, which include IRS, CGA and ADSE). This scenario, in my opinion, is already bad.
Now, let's see what 2013 holds for this couple:
Annual gross income (13 x 3000) | 39000
Marital quotient (/2) | 19500
Average rate: 14.5% x 7000 | ~1015
Standard rate: 28.5% x (19500 - 7000) | ~3563
Extra 4% rate | 760
Caixa Geral de Aposentações (11%) | 2145
ADSE (1.5%) | ~293
Total net income | 2 x (19500 - 1015 - 3563 - 760 - 2145 - 293) = 23448
Well, the results are plain for everyone to see: even though the couple's gross income went from 36000 to 39000, their net income is now only ~60% of it (instead of the 68% of a lower gross value in 2012!). On top of that, this is the first time I can remember seeing an increase in gross income translate into an effective reduction of the net amount received at the end of the year. Several things contribute to this. Notice how the brackets' limits (in 2013, there's a lower first bracket with an upper limit of 7000, whereas the 7410 used in the 2012 table was the upper limit of the *second* bracket!) and their rates go up in 2013 (the very first 2013 bracket already carries a 14.5% rate!).
In practice, the couple will end the year with 1032 euros less, despite supposedly receiving one extra month of salary. As if that weren't bad enough, prices will rise across the board (e.g. electricity will go up 2.8%!).
So, did I get the math wrong? I sincerely hope so. Because if the scenario isn't pretty for this income (which I consider low!), things only get worse for higher incomes (of course, the poor souls who earn less will struggle just to feed themselves, but make no mistake: the higher the income, the more will be deducted in absolute terms).
To wrap up, two notes:
Final comment: when will we get concrete measures that actually improve the country? We don't need this much money to improve the justice system, do we?
In the previous post, we've started looking at how we can use attributes to improve the metadata of a type. Even though we've looked at some attributes and seen how the C# compiler reduces the required typing by allowing us to skip the Attribute suffix, I didn't really get into the details of how attributes are defined.
In practice, an attribute is always a class. CLR compatible attributes are represented by classes which derive, directly or indirectly, from the Attribute class. Whenever we apply an attribute to a type or member, the compiler needs to create an instance of that type. In fact, you’ve probably noticed that the syntax used for applying an attribute is similar to a constructor call (without the new operator). C# allows us to use a special syntax for setting up properties too. The next snippet starts by creating a new custom attribute and shows how you can initialize its properties in C#:
class DumbAttribute : Attribute {
    public DumbAttribute(String name) {
        Name = name;
    }

    public String Name { get; private set; }

    public String MoreInfo { get; set; }
}

[Dumb("Luis", MoreInfo = "Say something!")]
class Program {
    // ...
}
As you can see, we've started by passing the String used for initializing the name property required by the constructor (since I didn't specify a default constructor, there's no way to create a new DumbAttribute instance without passing at least a string!). After that, I've initialized the MoreInfo read/write property by using a name/value pair. The docs use different names for these two kinds of parameters: the constructor arguments are called positional parameters, while the property (or field) initializers are called named parameters.
When we don't need to pass any parameters to an attribute instantiation, then we can simply omit the parameters like we did in the samples shown in one of the last posts:
In C#, there are several ways for us to apply several attributes to a type or member. We can wrap each attribute with its own square brackets ([ ]) or we can use a single pair of square brackets and separate each attribute with a comma. The next snippet shows both approaches:
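For example (using two common framework attributes purely for illustration):

```csharp
// one pair of square brackets per attribute
[Serializable]
[Obsolete("Use the new type instead")]
class FirstApproach { }

// a single pair of square brackets, attributes separated by a comma
[Serializable, Obsolete("Use the new type instead")]
class SecondApproach { }
```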
Before ending, there’s still time for a couple of observations about custom attributes:
These rules are needed due to the work performed by a compiler when it finds an attribute applied to a type or member. When that happens, the compiler needs to emit information into the type's or member's metadata table so that an instance of that attribute can be created at runtime (each parameter is serialized before being stored, and that is why we're limited to those types and can only use constant expressions).
In the next post, we’ll see how we can influence the elements to which attributes are applied. Stay tuned for more.
As you've probably noticed, I've been a little busy with my new cat. Besides that, I've also caught a cold and I'm still behind on my latest work project. Nonetheless, I need to relax, and I guess that writing another post in the .NET nullable value types series is a good way to let off some steam…what can I say? 🙂
In the previous post of the series, we've seen how C# simplifies the code needed for working with nullable value types (ie, with the Nullable&lt;T&gt; type). If you've been using nullable value types, you've probably noticed that they don't really behave like "normal" value types. This is only possible because the CLR understands that nullable value types are "special" types and gives them special treatment. Here's a small example:
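Here's a minimal sketch of the sort of snippet being discussed:

```csharp
Int32? aux = 10;
Console.WriteLine(aux.GetType()); // what gets printed here?
```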
So, what should the previous snippet print? Int32? Nullable<Int32>? Well, the truth is that Nullable<T> lies and returns Int32 (instead of Nullable<Int32>). This is just one example that shows that Nullable<T> does, in fact, enjoy special treatment from the CLR…but there’s more:
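A sketch of the boxing scenario described next (the variable names match the discussion):

```csharp
Int32? aux = 10;
Object someA = aux; // boxes the wrapped Int32: someA references a boxed 10

aux = null;
someA = aux;        // no boxing here: someA is simply set to null
```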
In the previous snippet, we're initializing someA (which is an object) with aux (a nullable value type). If you recall our previous discussions, you'll remember that putting a value type into an object will always result in a boxing operation. And that's exactly what is going on here. When the CLR notices that the nullable value type does, indeed, hold a value, it will automatically box that value. When that doesn't happen (ie, when aux.HasValue returns false), then the CLR won't do a thing and someA will simply be set to null. Groovy, right?
If the CLR can box a nullable value type, then it also needs to perform the reverse operation. In practice, you can unbox a previously boxed T into T or into a Nullable<T> value:
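For example (a sketch):

```csharp
Object boxed = 10;               // a boxed Int32
Int32 value = (Int32)boxed;      // ok
Int32? nullable = (Int32?)boxed; // also ok

Object nothing = null;
nullable = (Int32?)nothing;      // ok: nullable ends up without a value
value = (Int32)nothing;          // throws NullReferenceException
```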
As you can see, unboxing null will throw whenever you attempt to convert it to T. That won't happen when you use T? (ie, Nullable&lt;T&gt;) because we've already seen that it's possible to initialize T? with null. What about interfaces? For instance, if you look at the Int32 type, you'll quickly notice that it implements the IComparable interface (explicitly). What happens when you need to work directly with that interface? For instance, should the following code work?
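Something like this (a sketch of the code in question):

```csharp
Int32? aux = 10;
IComparable comparable = aux; // should this compile and run?
Console.WriteLine(comparable.CompareTo(5));
```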
aux is a Nullable<Int32> instance and Nullable<T> does not implement the IComparable interface. But Int32 does. If there were no special support from the C# compiler and from the CLR, that would mean that the previous code would, at least, need to perform an explicit cast to Int32 (or access the aux.Value directly). In other words, it would make working with nullable value types a little more cumbersome.
This special support from the CLR makes using nullable value types easy and transparent. And this makes me a happier person…it does…and that's all for now. Stay tuned for more.
As I've said, a variable holding a value type value can never be null. The justification for this behavior is obvious: the variable contains the value itself (unlike reference types, where the variable holds a reference to another memory location in the heap). And life was fine until someone noticed that it would be really useful to say that a value type variable holds the value null. "What?", you say, "that makes no sense!". And you're probably right. In theory, there's no place for null in value types…at least, not until you need to load a value from a database table column which is allowed to have nulls…and that's why MS introduced the Nullable&lt;T&gt; struct. Let's start with some code:
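A sketch of the kind of code discussed next:

```csharp
Nullable<Int32> first = new Nullable<Int32>();   // wraps no value
Nullable<Int32> second = new Nullable<Int32>(10);

Console.WriteLine("first: HasValue={0}", first.HasValue);
Console.WriteLine("second: HasValue={0}, Value={1}", second.HasValue, second.Value);
```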
Running the previous code returns the following results in my machine:
There’s already a lot going on in the previous example:
Nullable&lt;T&gt; is just a lightweight generic struct which wraps a value type field. Besides that field, it also stores a Boolean field which is used for checking if the current Nullable&lt;T&gt; instance holds a valid value. The struct exposes a constructor which receives a T value used for initializing the internal fields. The struct introduces a couple of operators too: an implicit conversion operator from T to Nullable&lt;T&gt; and an explicit conversion operator from Nullable&lt;T&gt; to T (which throws when there's no wrapped value).
Finally, the Nullable<T> struct overrides the Equals, GetHashCode and ToString methods so that you can compare Nullable<T>s and get a string which better represents its state. If you’re a C# developer, then you’ll be glad to know that there’s a simplified syntax for using nullable value types from your code. And that’s what we’ll see in the next post. Stay tuned for more.
In the previous post, I've mentioned that I'd dedicate a post to the topic of formatting. And I think the best way to start the discussion is to look at the ToString instance method. ToString is a public and virtual method introduced by the Object class. In practice, this means that it can be used on any instance of any type. By convention, it returns a string which represents the current object, formatted according to the calling thread's culture. Here's an example that illustrates the use of this method for printing the value of a double:
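Here's a sketch of that double example (the exact strings come from the framework's culture tables):

```csharp
using System;
using System.Globalization;
using System.Threading;

class CultureDemo {
    static void Main() {
        double value = 1234.56;
        Thread.CurrentThread.CurrentCulture = new CultureInfo("en-US");
        Console.WriteLine(value); // 1234.56
        Thread.CurrentThread.CurrentCulture = new CultureInfo("pt-PT");
        Console.WriteLine(value); // 1234,56
    }
}
```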
As you can see from the comments, changing the current culture from the calling thread results in two different strings. This happens because the Double type overrides the virtual ToString method inherited so that it can provide a reasonable description for its “content”. If it didn’t, then the returned string would only reflect the name of the type:
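For instance (a hypothetical type that doesn't override ToString):

```csharp
class WithoutOverride { }

// elsewhere:
Console.WriteLine(new WithoutOverride()); // prints the (namespace qualified) type name
```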
Btw, it's a good idea to override the ToString method whenever you create a new type. Overriding this method is also important when you debug an application because the ToString method will also be called by VS when you put the cursor over an instance of that type or when you add an instance to the watch window (in other words, the ToString method can be overridden to improve your debugging experience in VS).
The main problem associated with the “inherited” ToString method is that there’s no way for the caller to customize the culture used internally by the method. Yes, you can change the thread’s culture, but that is often an overkill operation for getting a string in a specific culture…And that’s why the framework introduced the IFormattable interface. This interface has a single method, which looks like this:
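This is what it looks like:

```csharp
public interface IFormattable {
    String ToString(String format, IFormatProvider formatProvider);
}
```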
By default, this interface is implemented by most of the base types exposed by the .NET framework. Even enums have it implemented by default…As you can see, the ToString method expects two parameters:
If the type that implements the IFormattable interface doesn’t support the format string received, then it should generate a FormatException exception. Several of the base types that implement this interface are able to receive several format strings. For instance, take a look at the following example:
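A sketch of that example (pinning the culture to en-US so the output is predictable):

```csharp
var date = new DateTime(2012, 12, 25);
var enUS = new CultureInfo("en-US");
Console.WriteLine(date.ToString("d", enUS)); // 12/25/2012
Console.WriteLine(date.ToString("D", enUS)); // Tuesday, December 25, 2012
```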
As you can see, “d” formats the current date in the short date form while “D” uses a long format form. DateTime supports other format strings too: for instance, you can use “U” for getting a string with the current date in the universal time in full date format. There are also some strings which can be used for formatting different types of objects. For instance, you can use “G” for getting a string for an enum value or a number (Int32, Decimal, Double, etc) in the general form:
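For instance (a sketch, using the invariant culture for the number so the output is stable):

```csharp
Console.WriteLine(DayOfWeek.Friday.ToString("G"));                     // Friday
Console.WriteLine(1234.5.ToString("G", CultureInfo.InvariantCulture)); // 1234.5
```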
By default, all objects should serialize themselves in the so-called general form. The general form is just a string which represents the most commonly used format of an instance. As you've probably deduced from the previous paragraphs, the general form string should be returned when you pass the "G" format string or null (it's also a good practice to return the general form string from the override of the ToString method inherited from Object).
Notice that the format string is only responsible for influencing the way that data is presented. For instance, if you’ve got an integer that needs to be represented, that integer can be a quantity. But it could also represent a value in currency. And that’s what the format string parameter does: it specifies the type of information returned in the string.
But that’s only half of the story. For instance, in the next example, I’m saying that money should be represented as currency (notice the “C” format string). Since the currency symbol changes from culture to culture, the ToString method can also receive a second parameter with culture specific info. And that’s what the IFormatProvider parameter does: it can return an object that knows how to format a value according to a specific culture. If you want, you can simply pass null for this parameter. By doing that, you’re saying that all formatting should be done according to the calling thread’s culture (since this is a common scenario, it’s usual for a type to expose an overload of the ToString method which only receives a format string).
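Here's a sketch of that currency example:

```csharp
double money = 1099.99;
Console.WriteLine(money.ToString("C", new CultureInfo("en-US"))); // $1,099.99
Console.WriteLine(money.ToString("C", new CultureInfo("pt-PT"))); // same value, with the euro symbol
```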
As a side note, I had to change the default encoding used in the console output so that I could get an euro symbol printed…
The CultureInfo type is one of the few types that implement the IFormatProvider interface:
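The interface itself has a single method:

```csharp
public interface IFormatProvider {
    Object GetFormat(Type formatType);
}
```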
You can build an instance of the CultureInfo type for any of the existing major cultures. The easiest way to do that is to pass a string which identifies that culture. It's that easy! CultureInfo's implementation of IFormatProvider is rather simple: it will only respond to requests for the NumberFormatInfo or DateTimeFormatInfo types (and this is because, currently, the framework will only format numbers and dates).
Each of these types (NumberFormatInfo and DateTimeFormatInfo) exposes several interesting properties which are used by ToString for formatting the values (ex.: NumberFormatInfo exposes a CurrencySymbol property which identifies the symbol used for the currency associated with the current culture). As you're probably expecting, the values returned by these properties depend on the culture specified during the instantiation of the CultureInfo object: internally, the constructor relies on an internal culture table which has all the required info for correctly formatting numbers and dates for most of the existing cultures.
And I guess this is all for now. In the next post, we’ll keep looking at formatting and see how we can influence the way values get formatted. Stay tuned for more.
Today, we’ll keep looking at generics and we’ll see how type inference is used to simplify the code we need to write to invoke a method. Lets start with a simple example:
And yes, you can also define generics at the method level…Now, without type inference, we would have to explicitly specify the generic type arguments expected by the method:
The good news is that the MS team added generic type arguments inference. In practice, this means that we can simply call the method *without* explicitly specifying its type arguments:
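A sketch of what's being discussed (PrintTypeName is a hypothetical method name):

```csharp
static void PrintTypeName<T>(T item) {
    Console.WriteLine(typeof(T));
}

// without inference, we'd have to be explicit:
PrintTypeName<Int32>(10);
// with inference, the compiler figures out that T is Int32:
PrintTypeName(10); // prints System.Int32
```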
Now, there's one important detail which might take you by surprise…Take a look at the following snippet:
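Something along these lines (again, PrintTypeName is a hypothetical helper that prints typeof(T)):

```csharp
static void PrintTypeName<T>(T item) {
    Console.WriteLine(typeof(T));
}

// elsewhere:
Object str = "Hello there!";
PrintTypeName(str); // what gets printed here?
```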
What will appear in the console now? If you're thinking that the type will be System.Int32 like in the previous snippet, you're wrong. You see, the compiler uses the variable's data type (instead of the actual type of the object referred to by that variable) when it has to infer the generic type argument. Another interesting aspect of using generics with methods is trying to understand how things work with overloads. What should happen when the compiler finds these method overloads:
Yes, it does compile…but how does the compiler find the correct method for each of the following calls:
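A sketch of those overloads and the calls against them (the names are mine):

```csharp
static void Process(Int32 value) { Console.WriteLine("non-generic"); }
static void Process<T>(T value)  { Console.WriteLine("generic"); }

Process(10);        // prints "non-generic": the exact match wins
Process("hello");   // prints "generic"
Process<Int32>(10); // prints "generic": the explicit type argument forces it
</imports>```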
As you can see, the compiler will always choose a more specific call over a generic match (and that’s why the first method ends up being invoked for the first call). Notice also that when you explicitly specify the generic type argument, then the compiler is obliged to call the generic method.
And I guess that’s all for now. Stay tuned for more.
Ravanelli was a fantastic cat…today we had to let him go after a liver problem which got really bad in these last couple of days. He died before reaching the impressive age of 17 (which he would have reached by June). That means that he was with me for almost half of my life (currently, I'm 34). I'll miss him, but that's life. Thanks for everything, Rava!
So, it’s time to wish a Merry Christmas and a Happy New Year!
When we create a new type and define a constructor, I guess there's nothing wrong with expecting to see that constructor called whenever a new instance of the type is created. The problem is that there are some cases where that doesn't happen. For instance, here are two situations where your constructor won't be called: during deserialization and when you use the MemberwiseClone method.
In the past, I've already mentioned the serialization gotcha (you can read about it here and here), so in this post, I'll only be concentrating on the MemberwiseClone method. The MemberwiseClone method is protected and defined by the Object class (which means that it is inherited by all types). You'll use this method whenever you want to perform a shallow copy of an instance.
What you must keep in mind is that when you call this method, the constructor of the type that is being duplicated won’t be called. Instead, the method will simply allocate the necessary memory and bit-copy the instance fields to that new allocated memory space.
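A minimal sketch of what this looks like (Person and ShallowCopy are hypothetical names):

```csharp
class Person {
    public Person() {
        Console.WriteLine("constructor called");
    }

    public String Name { get; set; }

    public Person ShallowCopy() {
        // bit-copies the instance fields; the constructor is *not* executed
        return (Person)MemberwiseClone();
    }
}

// elsewhere:
var original = new Person { Name = "Luis" }; // prints "constructor called"
var copy = original.ShallowCopy();           // prints nothing, but copy.Name is "Luis"
```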
Since this is a protected method, you'll only be able to call it from within the class itself (or from a derived class). This means you're in control, and it shouldn't really cause the same problems you might get when you use serialization.
And that’s it for now. Stay tuned for more.
In these last days I've gone back to the ASP.NET world (server side) since I'm thinking of rewriting my ASP.NET book (which was written way back when ASP.NET 2.0 was released and which got a slight update for .NET 3.5). One of the things I've noticed is that now we can change the action attribute of a form and it sticks! According to my investigations, this behavior was changed with the release of .NET 3.5.
Now, you might be wondering why you'd want to use this attribute…it's especially cool when you're using friendly urls (remember: in the past, we didn't have any url routing engines). Suppose you're using the old friendly urls introduced by ASP.NET 2.0:
<add url="~/Activities" mappedUrl="~/Activities.aspx"/>
Now, the problem with pre-3.5 releases was that you couldn't set the Action property and you'd end up getting Activities.aspx in your browser's location box. There were some hacks, but…they were simply hacks…With the 3.5 behavior change, those hacks can finally go back into the garbage bin! Hurray! :)
A few posts back, we started talking about how we can use the DataView control in an imperative way. At the time, we were still working with preview 5. Now that preview 6 is out, we’ll keep discussing this topic, but we’ll be upgrading our code from preview5 to preview6.
As you probably recall, we've already removed the declarative instructions from the DataView control; the only thing missing was the HTML code used by the template. In an older post, we've seen that template parsing transforms the HTML into a JavaScript function which is used for instantiating the nodes when someone calls instantiateIn over a template instance. Our first approach for creating a template imperatively might rely on rewriting the instantiateIn function or the (private) _instantiateIn method (which is built automatically during the parsing of templates defined through markup). This does, in fact, work. However, it requires a lot of work (much more than I'm willing to do, believe me!)
Fortunately, there's another approach (thanks once more go to Dave Reed for pointing me in the right direction): we can handle the itemRendered event of the DataView. We still haven't looked at the events generated by the DataView control. For now, it's sufficient to understand that the control will fire the itemRendered event after each template instantiation (which runs once for each item in the data passed to it). Here's an example which shows how to do everything from JavaScript code:
<head>
  <title></title>
  <script src="Scripts/MicrosoftAjax/start.debug.js" type="text/javascript"></script>
</head>
<body>
  <ul id="dv">
    <li>Top item</li>
    <li id="putItHere"></li>
    <li>Bottom item</li>
  </ul>
</body>
<script type="text/javascript">
  var data = [
    { name: "luis", address: "funchal" },
    { name: "paulo", address: "lisbon" },
    { name: "rita", address: "oporto" }
  ];
  Sys.require([Sys.components.dataView]);
  Sys.onReady(function () {
    var helper = document.createElement("UL");
    helper.innerHTML = "<LI></LI>";
    var template = new Sys.UI.Template(helper);
    var placeholder = $get("putItHere");
    Sys.create.dataView(Sys.get("#dv"), {
      data: data,
      itemTemplate: template,
      itemPlaceholder: placeholder,
      itemRendered: function (sender, e) {
        // e represents the data context
        var data = e.dataItem;
        var node = e.nodes[0];
        node.innerHTML = data.name + "-" + data.address;
      }
    });
  });
</script>
Let's see if I can explain what's going on here:
And there you go: as you can see, you’re not forced to “pollute” your markup with templates since you can do everything from script. However, I believe that “HTML pollution” is a small price to pay here, especially when you’ve got nested templates (more on future posts): the simplicity and productivity boost you get from them is undeniable!
And that’s it for now. Stay tuned for more on MS AJAX.
[Update: removed null parameters…after all this is JavaScript, not C#. Thanks, once again, go to Bertrand]
In the previous post, we’ve seen that we can use JavaScript to create a binding between two objects. At the time, I’ve mentioned two important things: bindings are represented by Sys.Binding objects and bindings are also MS AJAX components (since the “Sys.Binding” type expands the Sys.Component type). One of the things that I’ve mentioned was that you needed to call the initialize method to ensure that everything works ok.
As Bertrand said in the comments (and believe me, you should always listen carefully to what he says :)), you can reduce the JavaScript code by using the $create helper. That means that we can create a new binding with a simple method call:
<script type="text/javascript">
  function pageLoad() {
    $create(Sys.Binding, {
      source: $get("source"),
      path: "value",
      target: $get("target"),
      targetProperty: "value",
      id: "testBinding"
    });
  }
</script>
Notice that I've also set the id property. That means we can use the $find helper if we need to get a reference to this binding object later (notice that there's another option here: you could use a global variable and save the return value of the $create call – don't forget that $create always returns a reference to the component it creates).
I do recommend this approach for creating bindings (or any sort of MS AJAX components for that matter). Besides reducing the code, it will always ensure that you initialize the component (something that most people end up forgetting after initializing the properties of the object).
And that’s it. We’ll go back to interesting stuff in the next post. Stay tuned for more on MS AJAX.
In the previous post, we've seen how we can use one-time/one-way bindings through {{ }} expressions. As I've said, MS AJAX also introduces the concept of live bindings. Live bindings are way more powerful than the one-time/one-way bindings we've met in the previous post. Unlike those bindings, which rely on simple JavaScript evaluations (through the eval method), live bindings are always represented by Sys.Binding object instances and allow the other binding scenarios we've spoken about in the previous post.
Live bindings can be specified through JavaScript (aka imperative approach) or by using a declarative approach. In this post, we’ll concentrate on using the imperative approach and we’ll leave the declarative approach for a future post. Sys.Binding instances are also components because the Sys.Binding type extends the Sys.Component class with the addition of several interesting properties and methods. We’ll start by presenting the most important ones:
These are the most used properties, but do keep in mind that there are more (we'll come back to them in future posts). The mode property determines how changes are propagated:
So,to introduce this, we’ll reuse the previous textboxes, but this time we’ll create a two way binding between them. Here’s the code needed for this relationship:
<body>
    <input type="text" id="source" value="Hi from bindings world!" />
    <input type="text" id="target" />
</body>
<script type="text/javascript">
    function pageLoad() {
        var binding = new Sys.Binding();
        //set source
        binding.set_source($get("source"));
        binding.set_path("value");
        //set target
        binding.set_target($get("target"));
        binding.set_targetProperty("value");
        binding.initialize(); //DON'T FORGET THIS
    }
</script>
Some observations regarding the previous code:
We still need to talk about a couple of properties that influence what happens during the propagation phase, but we’ll leave them for a future post (now I really need to go because it’s time to run my daily 4 km 🙂). Stay tuned for more.
One of the things you’ll need when you start using JavaScript is iterating through the arguments object. Most of the time, you’ll probably see code which looks like this:
function iterate1() {
    for (var i = 0; i < arguments.length; i++) {
        alert(i + ":" + arguments[i]);
    }
}
The previous snippet is simple and lets you iterate over each element. Unfortunately, it won’t work everywhere; for instance, in some script engines you’ll get nothing if you try to enumerate the arguments object with a for…in statement:
function iterate2() {
    for (var aux in arguments) {
        alert(aux + ":" + arguments[aux]);
    }
}
Running the previous sample won’t do anything (the arguments object is array-like, but it isn’t a real array). If only there were an easy way to convert the arguments object into an array… and yes, there is: you can use your knowledge of the Array API and calling contexts to build one with a single line of JavaScript:
function iterate3() {
    var arr = Array.prototype.slice.call(arguments, 0, arguments.length);
    for (var aux in arr) {
        alert(aux + ":" + arr[aux]); //index into arr, the real array
    }
}
And there you go! By using the Function’s call method, we’re changing the default context on which slice runs. Notice that we’re passing the start position and the end position in order to get a “real” array with the parameters that you’ve passed to the function (in fact, Array.prototype.slice.call(arguments) alone would do, since slice copies the whole thing by default). You can use this trick for transforming any array-like object into a “real” array.
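To illustrate that last point, here’s the same trick applied to a hand-made array-like object (arrayLike is invented for this example; any object exposing indexed entries and a length property will do):

```javascript
// An "array-like" object: indexed entries plus a length property,
// but none of the Array methods.
var arrayLike = { 0: "first", 1: "second", 2: "third", length: 3 };

// slice runs with arrayLike as its context and returns a brand new,
// "real" array holding the same elements.
var real = Array.prototype.slice.call(arrayLike);

console.log(real instanceof Array); // true
console.log(real.join(","));        // first,second,third
```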
And that’s it. Stay tuned for more on JavaScript.
Ok, I guess I’ve probably missed the setting, but does anyone know how to change the color of string literals in the JavaScript editor of VS 2010? Thanks…
As we’ve seen, one of the available options is blocking the thread by calling the EndXXX method directly. This might be a good option when you only need to do one or two small tasks and then you need to wait until the asynchronous operation is completed.
In practical terms, this option won’t be usable in many scenarios. However, I’d say that this is the easiest of the four available options because you practically don’t need to make any changes to the existing synchronous algorithm you might be using (and ease of usage is important too, right?). To show how you might use this option, let’s try to get the HTML of a page through an asynchronous request. Here’s some code you might use for a synchronous approach:
var request = WebRequest.Create("http://msmvps.com/blogs/luisabreu");
//(1)do something else not related with request
WebResponse response;
try {
response = request.GetResponse();
}
catch (WebException ex) {
Console.WriteLine(ex.ToString());
throw;
}
//(2)or do something else not related with request
//then proceed and do work with info returned from request
As you can see, if we’ve got something that needs to be done and that doesn’t depend on the web request, we must put it before or after the GetResponse method invocation. During the method invocation, the thread is blocked until the current web request ends (which might happen due to an exception or when it gets the response from the web site).
This isn’t what we want in most cases. That’s where we can use the APM to help us do an asynchronous call and (probably) improve the performance of our algorithm. Here’s one of the possibilities for getting the response in an asynchronous fashion:
var request = WebRequest.Create("http://msmvps.com/blogs/luisabreu");
var result = request.BeginGetResponse(null, null);
//do something else: could be (1) and/or (2) from the previous sample
//when that's done, it's time to block until we receive
//the response from the server
Console.WriteLine("async request fired at {0}", DateTime.Now);
WebResponse response;
try {
response = request.EndGetResponse(result);
}
catch(WebException ex){
//just catching webexceptions
//(the method might throw other exceptions)
Console.WriteLine(ex.ToString());
throw;
}
As you can see, we start by kicking off the asynchronous task and then execute other quick non-blocking tasks which don’t depend on the data returned by the web request. When we’re done with those simple tasks, we call the EndGetResponse method.
If the asynchronous request has ended, the EndGetResponse method returns (in this case, the WebResponse instance which contains the response returned by the site) or throws (if there was an exception during the asynchronous request) immediately. If the asynchronous request hasn’t ended yet, then the thread will block until the asynchronous processing ends.
Notice that, in this case, we’re relying on the APM implementation for doing the right thing (ie, to return or throw immediately or to block until the asynchronous request ends).
On the next post we’ll keep looking at the other available options for waiting for the completion of an asynchronous operation started through the APM pattern. Stay tuned for more on multithreading.
Yep, and more photos on what was the biggest snowfall in the last decade!
The unthinkable happened: my wife’s laptop decided to retire itself ahead of time. So, I was left with the task of finding her a replacement laptop for the cheapest price I could get. What happened? I ended up buying a new laptop for me, which means that she is now a proud user of a Toshiba laptop. But this post is not about Toshiba…it’s about my new Asus EEE PC 1000h!
I’ve added more memory (upgrading it from the default 1 GB to 2 GB of RAM) and I’ve installed 32-bit Vista on it. It’s been running smoothly so far (at least, it’s way better than I had anticipated). Ok, it has limitations, but after installing the drivers, I managed to get 2.7 on the Vista score (which is not bad for the price I paid for it – a total of 400 euros for the PC + memory + an 8 GB USB stick for installing Vista). I’ve only had time to install some applications, so I still don’t have any feedback on how it will react to heavier use. For now, the only thing I can say is that it’s working ok and I’m really enjoying the battery life and its weight!
On the negative side, I’m still not used to the right Shift key, and the touchpad sucks (you need to go to a gym just to push the touchpad buttons!). I guess I’ll have more to say about this computer in the next few days…
Today we’ll concentrate on the final process related with view generation. As we’ve seen in the last post, the WebFormView is responsible for instantiating the page (or user control) required for rendering the HTML sent back to the client. Let’s start with the ViewPage class…It extends the traditional ASP.NET Page class and also implements the IViewDataContainer interface. The IViewDataContainer interface is really simple and it’s only there for letting the view receive the ViewDataDictionary that contains data passed from the controller:
public interface IViewDataContainer {
ViewDataDictionary ViewData { get; set; }
}
Besides this property, you’ll notice that the ViewPage introduces several other interesting properties:
When you’re building views, you’ll end up creating new pages that inherit from this class. In fact, all your pages must inherit from it: if they don’t, you’ll end up getting an exception because the WebFormView class will always try to cast your pages/user controls to ViewPage or ViewUserControl.
The ViewUserControl class is similar to the ViewPage class. It doesn’t have a MasterLocation property because master pages apply only to pages, not to user controls. However, it has other interesting properties. For instance, the class exposes a ViewPage property, which tries to cast the Page property (that you normally find on the UserControl class) into a ViewPage instance. If the cast works, you’ll be able to access (for instance) the Url and Ajax properties from your control. If you don’t have a “valid” page, then you won’t be able to access any of those goodies.
If you look at the internals of the ViewUserControl class, you’ll find out that the ViewData property getter is interesting. You have two options here: you can pass the ViewDataDictionary or you can end up reusing the ViewDataDictionary of its parent. Another interesting thing: you can use a different view engine for rendering partial views!
By now we’ve got an idea of what is available on views and partial views. Before going on, I’d like to make something clear. Until now, I’ve been saying that a view is an ASPX page and that a partial view is a user control. This is what will happen most of the time (you can probably say that it’s the most natural mapping when you think about ASP.NET and MVC). However, you should also keep in mind that nothing prevents you from “promoting” an ascx into a view. In other words, even though most of the time you’ll end up using the model I’ve been referring to (ie, pages are mapped into ViewPages and user controls into ViewUserControls), nothing requires you to do that. A view can be a user control or a page (and the same thing can be said about partial views).
As we’ve seen, you’ll end up influencing the view that is called by returning an ActionResult from your controller method. But what about rendering partial views? A valid scenario for partial views is dividing your pages into smaller components (user controls) and then placing them on your pages. You can do that in several ways: you can use the traditional ASP.NET approach, where you add a Register directive to the page, or you can use the HtmlHelper and call its RenderPartial method. Do notice that this last option lets you specify the ViewDataDictionary (or even the view engine), while the first option ends up using the default ASP.NET engine.
It’s interesting to notice that the RenderPartial methods are implemented as extension methods (as are several of the HtmlHelper public methods). We’ll return to them in a future post.
Before ending, there’s time to speak about two other classes. If you want a typed view page or view user control, you should use the ViewPage&lt;T&gt; and ViewUserControl&lt;T&gt; classes. These classes extend the previous classes and “transform” the ViewData property into a strongly typed property of type ViewDataDictionary&lt;T&gt;. This means that if you’re passing a strongly typed object to your view, you’ll be able to access that object in your view (or partial view) through the ViewData property and access its properties directly.
On the next post, we’ll start looking at the helper classes. Stay tuned!
I still don’t understand why the compiler won’t give me a warning when I create an internal class with a public method. Here’s an example:
class MyInternalClass {
    public void Test() { } //no compiler warning
}
Ok, at the end of the day, Test is really an “internal” method, since the accessibility of a member can never be greater than that of its containing type. But why can’t I get at least a warning? In this case, setting the method’s accessibility to internal should be the “maximum” accessibility, right? There are other similar scenarios where we get the correct response from the compiler. For instance, you cannot declare a protected method on a struct. This makes sense, of course, because you cannot inherit from a struct (therefore, you really don’t need to declare protected methods).
So, here we are: on the one hand, the compiler will let me declare a method as public on an internal class without even generating a warning, even though that method will only end up being used in the assembly where it is defined, due to the accessibility of its containing type. On the other hand, the compiler won’t “relax” the protected-method error on a struct type.
I’m not a compiler geek (or even a C# language expert), so the problem might really be in the spec and not in the compiler. Anyway, I think that the compiler could at least generate a warning in these scenarios (internal classes with public methods). And what do you think?
In these last 2 days I’ve been looking at the internals of the System.Web.Routing assembly. I’ve thought about writing some posts with several notes on how it works so that I have a future reference when I need it. Putting it here on my blog will make it easy to find these notes and it may even help the guys that are starting out (but that are a little behind me right now).
In this post, I’ll just present the basics (and there really isn’t much to say, believe me). The routing assembly is based on three or four basic types + a module which is responsible for intercepting the requests and performing the mapping magic. But let’s start with those three or four basic types…
The RouteBase type is (arguably) one of the most important types you’ll find in this assembly. Its main objective is to define the contract that all routes must implement (normally, you’ll end up using the derived Route class in your programs). Currently, the class defines the following contract:
public abstract class RouteBase
{
protected RouteBase();
public abstract RouteData GetRouteData(HttpContextBase httpContext);
public abstract VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values);
}
As you can see, each concrete route type must implement two methods: GetRouteData and GetVirtualPath. The first (GetRouteData) is used (indirectly) by the UrlModule (which will be presented in a future post) to get the info associated with the current request. The RouteData object returned by this method has all the necessary info about the current route. At this time, it’s important to note that if none of the pre-registered routes (you’ll see more on this when we talk about the RouteTable class) returns a valid value from this method, then the routing framework won’t do anything and the request will end up being handled as if the routing platform didn’t exist.
Now, when there’s a route which matches the current request URL and you have a valid RouteData object, that instance will be used for getting the IRouteHandler associated with the current route. The IRouteHandler interface has the following signature:
public interface IRouteHandler
{
IHttpHandler GetHttpHandler(RequestContext requestContext);
}
As you can see, the main objective of an IRouteHandler is to return an IHttpHandler that will process the request. Notice that GetHttpHandler receives the current RequestContext instance, which should be propagated to the handler that will end up processing the request. This class (RequestContext) has two properties which will give you the current RouteData and the wrapped ASP.NET context (the HttpContext property – for more info on these wrappers, check my previous post on the subject).
It’s now time to look at the GetVirtualPath method exposed by the RouteBase class. The main objective of this method is to return an instance of the VirtualPathData associated with this request. You won’t normally be using this method if you’re going to use the MVC approach. On the other hand, it might be really important if you want to use the routing module with the Web Forms approach (just download the MVC preview code and take a look at the Futures folder).
Ok, so in this post we’ve already introduced several topics: the RouteBase contract, the RouteData and VirtualPathData classes, and the IRouteHandler interface.
In the next post, we’ll look at the use of routes and route tables.