What’s wrong with ASP.NET? Validation

ASP.NET introduced a fancy new user input validation framework that, at least at first glance, appears to be a great advance over the complete lack of built-in validation support in ASP.OLD. Declarative validation is certainly wonderful stuff, and getting client-side validation with no additional effort (at least if your clients are using supported browsers) isn’t too shabby either. Overall, using the built-in validation controls certainly seems like a good idea, particularly for those folks who wouldn’t be performing any validation otherwise because of the amount of work involved.


But what about those of us who had been performing validation all along? Do the ASP.NET validation controls really offer equivalent protection to our former “manual” validation? Unfortunately, for many of us, the answer is probably “no”. The main problem lies with the validation paradigm chosen for ASP.NET, which has the following properties:



  1. Each validation control operates independently of the others.
  2. The validation process does not output a validated value.

So what’s wrong with that, you ask? Well, each of the validation controls needs to re-read the target control value, as does the code that will ultimately process that value. To make the problem a bit more concrete, let’s look at the example of validating a birth date provided via a text box:

Each validation rule below is paired with the validation control that would implement it:

  1. Value is required. (RequiredFieldValidator)
  2. Value must be parseable to a valid date value without any time portion. (CompareValidator)
  3. Date may not be in the future or more than 130 years in the past. (CompareValidator)
  4. If the value indicates an age of less than 10 years or greater than 90 years, the user must provide confirmation that the value is correct, as opposed to a data entry error. (Custom control for soft validation)


In the above example, each validation control reads the text box value independently, which means that they cannot build upon parsing and restriction steps performed by the other validators. In addition, when our code needs to process the birth date, it must go back to the text box and read the string value again. That means that there are at least 5 reads of the original text box value and 4 separate parsing operations against it, each of which represents an opportunity to parse inconsistently with respect to culture and/or format. In addition, there’s no guarantee that each of those 5 reads is even accessing the same value, since it’s possible for the text box’s Text property to be altered between the reads (even if this is somewhat unlikely in an ASP.NET app).


How can we address this problem? There are plenty of workarounds, including using only custom validation controls, but that means quite a bit more work for developers. A much more reliable approach would involve reading the “raw” value from the text box only once. It would then be passed from one validator to another in a pre-defined order and would require only a single parsing. In the birth date example, the validator at step 2 would output a strongly typed DateTime value (assuming, of course, that step 2 validation passed), and that’s the value that would be passed to the validator at step 3. The output of the final validation step would be the value that our code would read, rather than going back to the text box to read its Text property.
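
Just to make the idea a bit more concrete, here’s a rough sketch of what such a chain might look like (all type names are hypothetical, and a real framework would obviously report validation failures rather than simply throwing):

using System;
using System.Globalization;

// Each validator consumes the output of its predecessor, so the raw text
// is read from the text box and parsed exactly once.
public interface IChainedValidator<TIn, TOut>
{
    TOut Validate(TIn value);
}

public sealed class BirthDateParser : IChainedValidator<string, DateTime>
{
    public DateTime Validate(string rawValue)
    {
        // Step 2: a single, culture-explicit parse that produces a strongly
        // typed DateTime with no time portion.
        return DateTime.Parse(rawValue, CultureInfo.CurrentCulture).Date;
    }
}

public sealed class BirthDateRangeValidator : IChainedValidator<DateTime, DateTime>
{
    public DateTime Validate(DateTime birthDate)
    {
        // Step 3: not in the future, and not more than 130 years in the past.
        if (birthDate > DateTime.Today || birthDate < DateTime.Today.AddYears(-130))
            throw new ArgumentOutOfRangeException("birthDate");
        return birthDate;
    }
}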


It’s entirely feasible to develop such a framework, but it’s also quite a lot of work, particularly when one considers the addition of appropriate design-time support. It’s obviously not the sort of thing on which most shops would want their developers spending time. Unfortunately, it’s also not the sort of thing that many shops would be willing to pay for as a third-party product, particularly given that most developers probably perceive themselves as getting by just fine with the existing ASP.NET validation controls.


On the other hand, it would probably be a reasonably small project on the Microsoft scale of things, and I suspect that I’m not the only one who would sleep better if the built-in tools would help the Morts of this world develop more reliable and secure applications out of the box, particularly given that most of us use applications built on those tools on a more or less regular basis.

What’s wrong with ASP.NET?

For quite some time now, I’ve been harbouring an increasing bit of frustration with ASP.NET. Overall, I like the platform, and I think that it’s a great advance over ASP.OLD. Unfortunately, there are a few areas in which I can’t help but feel that the design team missed the boat by just a wee bit too much, and compensating for these lacunae can mean a ridiculous amount of work for the individual developer.


There are three main areas that have been grating on my nerves of late:


  1. User input validation
  2. HTML encoding
  3. Culture usage

I’d love to see the ASP.NET team address fundamentals like these in upcoming versions, but new features generally garner considerably more interest amongst users than fixing things that very few people ever realized were broken. In an attempt to perhaps rouse a bit of interest in the latter, I’m starting a short series of posts on the above topics, starting with What’s wrong with ASP.NET? Validation.

New version of ImportsSorter add-in available

There’s a new version of the Bordecal.ImportsSorter add-in available for download. The only change is a fix for a bug that raised its ugly head when the shortcut menu was polled by another add-in. The hashes for the new MSI file are:


MD5: b91a7abde826173a0d7c9f5e05126b35


SHA1: 1b54e49189a921a07eecca82749860b6f9e41d7a

Hopping databases from the SAFE SQLCLR permission level

I’ve seen quite a few articles over the past few months that make the assumption that one can only connect to the hosting database from SQLCLR code running at the SAFE permission level. I can’t seem to find any official MSDN documentation that would directly reinforce this misconception, so I’m guessing that it stems from the limitation of the SqlClientPermission at the SAFE level to only allow use of the following connection strings (with optional specification of the Type System Version parameter):

context connection=true
or
context connection=yes

Unfortunately, the documentation for the SqlClientPermission.Add method is a wee bit ambiguous with respect to the effect of preventing arbitrary target database specifications in the connection string, and one might easily be led into believing that preventing use of the database parameter will prevent connections to unintended databases. However, while it will prevent mucking about with the connection string, that’s not enough to prevent connecting to other databases.


For starters, the SqlConnection object has a ChangeDatabase method that allows one to target another database after an initial connection has already been established, e.g.:1

using (SqlConnection connection = new SqlConnection(@"Data Source=(local);Initial Catalog=AllowedDB;Integrated Security=True"))
{
    connection.Open();

    // Switch to another database on the already-established connection.
    connection.ChangeDatabase("ForbiddenDB");

    using (SqlCommand command = connection.CreateCommand())
    {
        command.CommandType = CommandType.Text;
        command.CommandText = "SELECT DB_NAME()";

        // Prints "ForbiddenDB".
        Console.WriteLine((string)command.ExecuteScalar());
    }
}


Now, one might argue that this is actually a bug, and that the ChangeDatabase method ought to demand SqlClientPermission for the target database before making the switch. However, it’s quite possible to bypass the SqlClient layer entirely and make the switch inside database code, so any additional protection at the SqlClient level would only provide a false sense of security and probably isn’t worth implementing.


The next approach involves making a direct database context switch from T-SQL using the USE statement, e.g.:

using (SqlConnection connection = new SqlConnection(@"Data Source=(local);Initial Catalog=AllowedDB;Integrated Security=True"))
{
    connection.Open();

    using (SqlCommand command = connection.CreateCommand())
    {
        command.CommandType = CommandType.Text;

        // Switch the database context directly in T-SQL.
        command.CommandText = "USE ForbiddenDB";
        command.ExecuteNonQuery();

        command.CommandText = "SELECT DB_NAME()";

        // Prints "ForbiddenDB" again.
        Console.WriteLine((string)command.ExecuteScalar());
    }
}


Effectively, this means that SqlClientPermission provides no protection against using any particular database within a given SQL Server instance. You might guess that the SQLCLR would add some additional protection against database switching from within hosted code, but you’d be wrong. The above techniques work just as well against the SQLCLR context connection as they do against the plain old vanilla connections shown above. SAFE or not, SQLCLR assemblies can connect to any database in their host SQL Server instance assuming, of course, that user permissions also allow the connection.
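
By way of illustration, here’s a minimal sketch of the same trick run from inside a SAFE assembly (names hypothetical, with ForbiddenDB standing in for any database the assembly wasn’t meant to touch):

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public static class DatabaseHopper
{
    // The context connection is the only connection string that
    // SqlClientPermission allows at the SAFE level, yet (per the behaviour
    // described above) a USE statement still switches the database context
    // within the batch.
    [SqlProcedure]
    public static void ShowCurrentDatabase()
    {
        using (SqlConnection connection = new SqlConnection("context connection=true"))
        {
            connection.Open();

            using (SqlCommand command = connection.CreateCommand())
            {
                command.CommandText = "USE ForbiddenDB; SELECT DB_NAME();";

                // Sends "ForbiddenDB", assuming that the caller's SQL Server
                // permissions allow the connection.
                SqlContext.Pipe.Send((string)command.ExecuteScalar());
            }
        }
    }
}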






1 The DB_NAME function, when called with no parameters, returns the name of the current database. If you haven’t switched the context database, the function would be expected to return the name of the database against which the connection was originally established.

Why is my application coughing up a SecurityException after my code stops running?

Odd exceptions at odd times


If you apply a PrincipalPermission attribute to a class in order to restrict the users and/or roles that are permitted to use the class, you may start seeing security exceptions like the following being thrown at unexpected times (like, say, when your application is quitting):

System.Security.SecurityException was unhandled
  Message="Request for principal permission failed."
  Source="mscorlib"
  StackTrace:
       at System.Security.Permissions.PrincipalPermission.ThrowSecurityException()
       at System.Security.Permissions.PrincipalPermission.Demand()
       at System.Security.PermissionSet.DemandNonCAS()
       at YourNamespace.YourClass.Finalize()


What’s up with that?


The basic gist of the above exception is that the demand for your specified PrincipalPermission is failing when the finalizer for your class is invoked. If your class also happens to be disposable, and disposition suppresses its finalization, you might be tempted to believe that this problem occurs because the thread principal couldn’t satisfy the original PrincipalPermission demand at construction. However, things are a wee bit more complicated than that…


Finalization is triggered by the garbage collector and runs on a separate thread controlled by the garbage collector. This means that the principal that you set on your application’s thread won’t be applied to the thread on which an instance of your finalizable type gets finalized. Your PrincipalPermission demand will always fail at finalization, regardless of the thread principal set within your application.


Another surprise might be that there’s an object available for finalization at all if the PrincipalPermission demand fails when the constructor is invoked. What you actually end up with in such a case is a partially constructed instance of your type. This instance won’t be available to the invoking code, so it won’t be subject to disposition, but the garbage collector will still attempt to finalize it despite the fact that it isn’t fully constructed.
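
A minimal repro of the problem might look something like this (class and role names hypothetical):

using System;
using System.Security.Permissions;

// The class-level demand applies to every method, including Finalize.
[PrincipalPermission(SecurityAction.Demand, Role = @"BUILTIN\Administrators")]
public class AuditedResource
{
    ~AuditedResource()
    {
        // This runs on the GC's finalizer thread, where the application's
        // thread principal is not in effect, so the demand always fails here.
    }
}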



So what can I do about all this?


The simple answer is that you need to make it possible for the finalizer (and any other methods it calls) to run despite the fact that the finalizer thread cannot satisfy the class-level PrincipalPermission demand. You can do this by applying a PrincipalPermission attribute that allows unauthenticated callers to the finalizer (and any methods it calls). The C# form of this attribute would be:

[PrincipalPermission(SecurityAction.Demand, Authenticated = false)]

The VB form would be:

<PrincipalPermission(SecurityAction.Demand, Authenticated:=False)>
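
Putting the two together in C# (reusing the hypothetical class from above), the pattern looks like this:

[PrincipalPermission(SecurityAction.Demand, Role = @"BUILTIN\Administrators")]
public class AuditedResource
{
    // The method-level attribute replaces the class-level demand for the
    // finalizer, allowing it to run on the GC's thread under any principal.
    [PrincipalPermission(SecurityAction.Demand, Authenticated = false)]
    ~AuditedResource()
    {
        // Clean-up only; nothing here should depend on the caller's identity.
    }
}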

Obviously, if you’re going to be removing the requirement for the class-level PrincipalPermission from the finalizer or any other methods, you should also ensure that these methods don’t perform any actions that ought to be restricted to the user identity or role membership specified by the original PrincipalPermission.


You may also want to consider applying the same PrincipalPermission reversal to any methods used for disposition, even if these are not invoked from your finalizer. (This would be a bit of an odd design choice in most cases, but if that’s what you’re using, you should be addressing the consequences.) The main reason for this is that disposition might not be invoked under the same principal as was in place at construction. As with finalization, you should ensure that your disposition methods don’t perform any “high-privilege” actions if you do choose to reverse the PrincipalPermission requirement.



If that was the simple answer…


The good news is that the above approach is pretty much the only approach. It’s reasonably clear-cut, and there’s not much that you can do in terms of variation on the theme. The bad news is that, if you’re running into this particular problem, chances are pretty good that you should perhaps be concerned about some of the finer details of finalization and disposition. If you’re interested in learning more about these, I’d recommend reading Chris Brumme’s blog entry on finalization. If you’re implementing disposition as well as a finalizer, you should probably take a look at the recommended disposition pattern, and maybe even the whitepaper on .NET Framework Resource Management.

Secure by de…what?

Surprise!


User instances are a new capability of SQL Server 2005 (Express edition only) that is supposedly intended to allow non-admins to attach database files without requiring additional permissions. This actually works just fine and, at first glance, it probably strikes most folks as a lovely least-privilege accommodation. The unfortunate bit that might not be immediately obvious to the casual user is that this is accomplished by granting the connecting user sysadmin privilege over his user instance. This means that every connection to a user instance is a connection running as sysadmin.
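
For reference, a user instance is typically spun up via a connection string along these lines (file name hypothetical):

using System.Data.SqlClient;

// User Instance=True launches a private SQL Server Express instance running
// under the connecting user's account, with that user as its sysadmin.
SqlConnection connection = new SqlConnection(
    @"Data Source=.\SQLEXPRESS;Integrated Security=True;" +
    @"User Instance=True;AttachDbFilename=|DataDirectory|\MyApp.mdf");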


So… What’s so bad about connecting as sysadmin?


If you’re at all familiar with secure practices around database connectivity, you’ve probably heard that you should never connect under a sysadmin login unless you’re connecting for the express purpose of performing administrative tasks. The main reason for this is that a sysadmin login has unlimited control over the SQL Server instance, as well as being able to “climb” out of the SQL Server instance via extended stored procedures (or hosted SQLCLR code, in the case of SQL Server 2005) to affect other machine resources. In other words, code running under a sysadmin login can fully control the SQL Server instance and can do anything on the machine or network that either the login account or the SQL Server service account can do. It’s also possible to impersonate other Windows accounts when calling outside SQL Server, so the damage potential isn’t necessarily limited by the privileges of the login and service account.


Yikes! But, ummm… Yikes!


Hmmm… Sounds like running user instances might be just a wee bit on the risky side, doesn’t it? After a bit of a stumped initial reaction, the little voices in my head started evaluating the implementation against SD3+C (“secure by design, secure by default, secure in deployment, and communications”) criteria, which is supposed to be an integral part of the Microsoft security development lifecycle. I can’t help but feel that some less risky choices might have been made along the way, but perhaps that’s just my paranoid nutbag side talking. You decide…



Secure by design?


The main goal of user instance mode seems to be allowing applications to attach database files even when running under a limited-privilege user account. That’s pretty necessary if you’re going to, say, push user instance mode SQL Server Express as a replacement for Jet. That said, might some safer design choices have been made when choosing how to implement this requirement?



  1. Does the connecting user really need to be a sysadmin?
    Probably not. Membership in the dbcreator role would probably have been quite sufficient for the purposes of attaching database files without invoking additional risks around control of the instance configuration and allowing code to call out of the database. However, a potentially more interesting design might allow a true sysadmin of the master SQL Express instance to designate the role membership of a user instance creator. This would allow even dbcreator membership to be revoked when it isn’t actually needed, which could be the case if one were to configure the user instance template data files to pre-connect to a designated set of databases.


  2. Is the connecting user account really the best choice for the service account?
    On the surface, choosing to run the user instance under the connecting user’s account might actually seem to be a good choice. After all, it ensures that code run within the user instance can’t do anything that the user himself can’t do (unless, of course, impersonation is used). However, if you turn things around a bit and assume that an attached database might come from a less than ideal source (say, passed around from one user to another, all of whom act as dbo and sysadmin while the database is in their hands), running with the full privileges of the connecting user all of a sudden doesn’t sound so good…

    Could another choice have been made here? Granted, there are some challenges around designating the permissions granted to any alternate account. However, one obvious possibility would be to allow a master instance administrator to designate per-user service accounts for user instance mode. As with master instance service accounts, such a mechanism could automatically assign the minimal user permission set required for service operation, thereby reducing the administrative burden. A configurable design could also allow for enabling/disabling user instance mode by user (with disabled as the default state for a properly “secure by default” design).


  3. Do user instances really need the full functionality of stand-alone instances?
    If the true purpose of user instances is to permit applications to attach local database files, why include any functionality beyond what’s needed to act as a pure database server? Do such applications really need to be able to run extended stored procedures like xp_cmdshell? If not, why include it at all?


  4. What CAS permissions ought to be assigned to assemblies hosted in an attached database?
    Unfortunately, all assemblies hosted by the SQLCLR are assigned local zone evidence, which means that a database loaded from a remote location (either with an application loaded from that location or as an attached remote database) will be granted unrestricted CAS permissions under default CAS policy. In order to prevent remotely sourced applications from escalating their own CAS privilege via this mechanism, the SQLCLR probably ought to assign zone evidence based on database source locations in a manner similar to what the stand-alone CLR does for assemblies.

Secure by default?


Well, it looks like someone did at least give this one the old college try. For example, regardless of the master instance setting, a user instance will have xp_cmdshell use disabled by default. Unfortunately, it’s trivial to enable the option from within any application connected to a user instance since the user is running as a sysadmin, so this is essentially just a bit of cosmetic cover-up.
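
For example, something along the following lines, run by any application connected to a user instance, would quietly turn the option right back on (a sketch, reusing the hypothetical connection string from above):

using (SqlConnection connection = new SqlConnection(
    @"Data Source=.\SQLEXPRESS;Integrated Security=True;" +
    @"User Instance=True;AttachDbFilename=|DataDirectory|\MyApp.mdf"))
{
    connection.Open();

    using (SqlCommand command = connection.CreateCommand())
    {
        // The connecting user is a sysadmin of the user instance,
        // so nothing stops this.
        command.CommandText =
            "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; " +
            "EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;";
        command.ExecuteNonQuery();
    }
}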


Given the current design, the only real “secure by default” setting that I can see would be to deploy SQL Server Express with user instance mode disabled by default. Since most machines on which the Express edition will be installed will likely never need to run user instances, it’s really rather disappointing that it’s enabled by default in the first place. Then again, this is an obvious ease of use vs. security trade-off, and it’s not exactly difficult to imagine the meeting at which the decision was made…



Secure in deployment?


There’s little an administrator or user can do to make user instances more secure if they’re enabled. There appears to be no information at all out there about the risks of their use, never mind guidance on how to use them securely. We’ll have to wait to see if updates will be easy to deploy, but updating all user instances on any given machine will certainly pose some potentially interesting challenges.



Communications?


Well, I guess we’ll see… 😉


Ouch! Band-aids, anyone?


If you need to install the SQL Server Express edition and want to protect yourself against these risks, there are a few things you can do. For starters, unless you absolutely need user instances, disabling them would probably be a really good idea. This can be done by executing sp_configure against the master SQL Express instance on a machine as follows:

sp_configure 'user instances enabled', 0
GO

RECONFIGURE
GO



Developers who distribute SQL Server Express edition with their applications might also want to keep this in mind. If you don’t use user instances in your application, you should probably disable them as part of the installation. Also, given the risks involved with running user instances, you might want to consider avoiding their use if at all possible. (BTW, if you’ve installed Visual Studio 2005 on your machine, there’s a good chance that SQL Express edition was also installed, and you might want to take a little break from reading this in order to run off and disable user instances.)


So, that’s all fine and dandy if you don’t need user instances at all. What happens if you really need to run an application that uses user instances? For starters, you might want to limit which users can create user instances. Unfortunately, as far as I know, the only way to do this at present would be to remove user permissions on the directory created for a user instance. In other words, for any user to whom you wish to deny user instances, you would need to create a %USERPROFILE%\Local Settings\Application Data\Microsoft\Microsoft SQL Server Data\SQLEXPRESS folder, then remove the user’s NTFS permissions on the folder. Since this is a major pain in the caboose, as well as easy to miss doing for any given account, it’s the sort of thing you might want to consider automating via a default login script or similar mechanism. BTW, if you do make this permission alteration, other processes such as backups may be affected, so you might want to do some pretty thorough testing before, say, pushing this sort of thing out to your entire domain…
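
If you do decide to automate the lock-down, a rough .NET 2.0 sketch might look like the following (path and account names are hypothetical and, as noted above, you’ll want to test carefully before pushing anything like this out):

using System.IO;
using System.Security.AccessControl;

string folder = @"C:\Documents and Settings\SomeUser\Local Settings" +
                @"\Application Data\Microsoft\Microsoft SQL Server Data\SQLEXPRESS";

// Create the folder pre-emptively, then deny the user access so that a user
// instance can no longer set up shop there.
Directory.CreateDirectory(folder);
DirectorySecurity security = Directory.GetAccessControl(folder);
security.AddAccessRule(new FileSystemAccessRule(
    @"DOMAIN\SomeUser", FileSystemRights.FullControl, AccessControlType.Deny));
Directory.SetAccessControl(folder, security);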


What about CAS permissions?


Sorry, but CAS isn’t going to help much here if you allow connections to a user instance. Code with any SqlClientPermission can do anything the connecting user is allowed to do via the SQL Server instance. When connecting to a remote instance (or even a local non-user instance), the user’s capabilities are usually (or so one would hope!) constrained by their NTFS permissions, SQL Server permissions, and limitations imposed by the configuration of the SQL Server instance. However, when running as sysadmin on a user instance, these constraints are mostly absent. If you grant any SqlClientPermission to managed code that permits connection to a user instance, you are effectively granting permission for that code to do anything the user can do. The end result for a malicious application is the same as if you had granted unrestricted CAS permissions (aka “full trust”). In other words, you shouldn’t be granting SqlClientPermission that includes the possibility to connect to a user instance to any assembly unless you would happily grant unrestricted permissions as well.


This means that granting unrestricted SqlClientPermission to any code (other than as part of a full trust grant) is a pretty horrible idea. Unfortunately, if you want to grant “almost unrestricted” SqlClientPermission that excludes the right to connect to user instances, the CAS permission configuration UIs won’t be of much help. Instead, you’ll need to define the permission “manually”. The XML definition for such a permission might look like this:

<IPermission
    class="System.Data.SqlClient.SqlClientPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
    version="1"
    AllowBlankPassword="True">
  <add KeyRestrictions="User Instance=;" KeyRestrictionBehavior="PreventUsage" />
</IPermission>
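
If you’d rather build the permission in code, the programmatic equivalent would presumably look something like this (a sketch; I’ll admit to not having exhaustively tested the empty connection string semantics):

using System.Data;
using System.Data.SqlClient;
using System.Security.Permissions;

// Equivalent of the XML above: allow any connection string except ones that
// use the User Instance keyword.
SqlClientPermission permission =
    new SqlClientPermission(PermissionState.None, true /* AllowBlankPassword */);
permission.Add(string.Empty, "User Instance=;", KeyRestrictionBehavior.PreventUsage);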


If you want to grant additional permissions to a network-sourced assembly so that it can connect to a SQL Server instance running on any server on your network, I’d recommend you use something like the above permission rather than an unrestricted SqlClientPermission grant. Otherwise, you might unwittingly be granting that assembly essentially unrestricted permissions over the machine on which it’s executing via code run within a user instance.


Wrapping things up…


In my opinion, SQL Express user instances just plain don’t meet the SD3+C bar, and disabling them is probably the best way for most of us to protect ourselves against the risks they introduce. Then again, I am something of a paranoid nutbag, so your mileage may vary greatly… 😉

New version of Bordecal.ImportsSorter add-in available

Once I actually started using my imports sorter add-in from the Larkware 2005 Developer Tool Programming Contest on a semi-regular basis, there were a few things that started irritating the heck out of me (and were presumably worse for everyone else). I finally found a bit of time to fix and test things, and a new version is available for download here, with updated documentation here. The new stuff includes:



  1. Option to not display the imports/using block after sorting.
  2. Option for duration to display the imports/using block after sorting, if display is enabled.
  3. Removal of whitespace from between imports/using directive lines. (This avoids an irritating problem with extra trailing lines that occurred when re-sorting the imports/using block with inter-group lines inserted.)

If you use the add-in and find a problem, just give me a shout at calinoiu@gmail.com

Speculations on the surprisingly under-documented world of SQL CLR CAS permission grants

I’d been hoping that the details of the SQL CLR CAS permission sets might make it into the SQL Server Books Online or other relevant documentation by the RTM timeframe. Unfortunately, I can’t seem to find anything that even begins to resemble a listing of the permissions, never mind coverage of some of the pickier details of their assessment and consequences. I’d already started trying to investigate some of this on my own during the beta and, after spending a bit more time with the RTM build (i.e.: pretty much wasting a perfectly good Saturday), here’s what I think I’ve discovered so far… (Click here to read the whole kit and kaboodle.)

Quality attributes for cheapskates

The ISO what?


For quite some time now, I had been assuming that the ISO/IEC list of software quality attributes is pretty much common knowledge, at least amongst the architect/analyst crowd. It turns out I was wrong. After a first hint in this direction a few weeks ago, I started asking around to see if folks had ever heard of the list. Despite some deficiencies in my sampling methodology (which pretty much entailed asking folks about it whenever I happened to think of it), the 100% blank stare response rate pretty much settled that question. At least it gave me something other than the fully trusted GAC to blog about… 😉




Momma and Poppa Bear


For those of you who might care about such things, this quality attribute stuff comes out of the software and system engineering sub-committee (SC 7) of the joint ISO/IEC technical committee on information technology (JTC 1). The relevant published standard is ISO/IEC 9126, which currently ships in 4 parts, all of which are listed under (surprise!) the standards published by JTC 1/SC 7. If you want to buy copies of these beasties, you might want to check the list of international member stores rather than buying directly from the ISO mother-ship and letting your credit card company profit off a conversion from Swiss francs.




What if I don’t have buckets of money to spend on this stuff?


OK, time for a small confession. I’ve never forked over money for the full standards either. My bad, although I do keep hoping to eventually work on a project that would merit it. In the meantime, I’m still using the quality attribute list even though I’m blissfully ignorant of the full methodology outlined in the standards, and there’s no reason you can’t do the same. The simple fact is that many/most of us don’t work on applications that are likely to merit the full methodology anyway, and development of such applications usually tends to be supported by QA teams that get paid to worry about this stuff on a full-time basis. 😉


If you want to get a copy of the quality attributes list without shelling out the big bucks, a good place to start is Wikipedia. Unfortunately, the Wikipedia list doesn’t include descriptions of the subcharacteristics, but you can pick those up from the EAGLES-I report. One thing to watch out for is that these lists are not all-inclusive. For example, since both are based on the 1991 release of ISO 9126, they are missing some subcharacteristics that appear to have been added in the ISO/IEC 9126-1:2001 release. At least a partial list of these can be seen at http://www.hostserver150.com/usabilit/tools/r_international.htm#9126-1. Also, you should keep in mind that the list was never meant to be exhaustive, and you may find that you need to make your own additions. For example, I like to add an “accessibility” subcharacteristic under “usability” since, even though it’s sort of implied under “operability”, I want to make sure that it gets considered in its own right when using the list.




So what the heck is this thing good for anyway?


So, now that you’ve got the list, what are you going to do with it? Obviously, it’s not suitable as a guideline for everything that ought to be done in any given piece of software. If you tried to fully implement the whole kit and kaboodle in any single application, you would probably still be working on the beast when the heat death of the universe eventually rolls around.


Personally, I like to use the list more or less as a checklist of things to discuss with the client during the requirements gathering phase of a project. Ideally, I like to run through at least two rounds of such discussion, preferably in meetings attended by all the key decision makers on both the business and technical sides. The first round takes place at the beginning of the requirements gathering process and, while it can help identify non-functional requirements with major project impacts, it is intended mainly to familiarize the business-side folks with the quality attributes so that they can consider them as they develop their functionality wish lists. The final (or as final as can be) decisions around the quality attributes would be made at the second round of discussions, which would take place towards the end of the requirements gathering process, once at least the broad strokes of the functional requirements are presumably known.

Heads, you lose. Tails, you lose.

Finally wrapping up my rebuttal of Shawn’s listing of reasons for forcing full trust of assemblies in the GAC…



6.a) “Based upon the assumption that GACed assemblies are receiving FullTrust, tools such as NGEN can have simpler code paths around security.”


Not too many users of the platform are likely to lose sleep worrying about how complex Microsoft’s private implementation of such tools might be. If any given feature is too difficult to implement without eroding the security protections offered by the platform, dropping the feature might be a better solution than dropping the protection. Of course, this only holds true if one values security over the feature in question.



6.b) “And reducing complexity in code paths that involve security helps to reduce the risk of bugs, which is a very good thing.”


Is it really necessary to sacrifice our ability to protect ourselves against similar bugs in the GACed assemblies distributed by both Microsoft and others simply in order to reduce the risk of bugs appearing in a much smaller set of supporting tools? Is Microsoft not concerned that the many GACed assemblies it distributes (and not just as part of the Framework itself) might be potentially just as buggy as these tools?



On the more general side of things…


It seems to me that this point is the actual reason Microsoft wants to force full trust in the GAC. The other points strike me as being little more than justifications that Microsoft is attempting to use to convince their clients (and maybe themselves?) that we shouldn’t object to the change. Since I don’t find the other five arguments to be even slightly compelling, I’d like to examine this “tools gains” issue in a bit more depth…


Last week, an article by Reid Wilkes covering new NGen features became available on the MSDN Magazine site. I had already suspected that the change in CAS behaviour might be the result of a security vs performance trade-off, and this article rang quite a few bells for me despite the fact that it made no mention of security (or at least not any that I could find).


Assuming my guess is right, and the forced full trust of GACed assemblies is meant to support improved performance via reduction/elimination of runtime permission verifications in extended NGen scenarios, a couple of reasonable questions might be:



1. Why move in this direction at all?


The performance vs security trade-off is nothing new. Presumably quite a few decisions were made during the design of the v. 1 .NET Framework regarding where and how to strike the balance between these two quality factors. What could have changed to make things swing the other way? Has Microsoft been receiving vociferous and frequent complaints about .NET performance? Has it been too difficult to sell folks on the idea that improved security comes at a cost? Is this due to nothing more than altered composition of the relevant product teams, with different people making different choices?



2. Is there no alternative that might satisfy both those who want enhanced performance and those who want to maintain full CAS functionality?


I can think of at least a couple of potential options:


a) CAS can already be disabled entirely, thereby allowing runtime stack walks for permission verification to be avoided. This is a client-controllable option that can be used to improve runtime performance. Why not tie assumed GAC full trust to this option, or some similar switch to be introduced for this purpose only? (Or might this sort of thing be the “complexity in code paths that involve security” that Microsoft is attempting to avoid?)
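
For what it’s worth, the machine-wide switch already exists, and its state is even visible from code:

// True unless CAS enforcement has been disabled machine-wide
// (e.g. via "caspol -security off").
bool casEnabled = System.Security.SecurityManager.SecurityEnabled;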


b) Accept that at least some performance and security goals may be incompatible. Instead of trying to accommodate all potential users with one platform that is too slow for some and too insecure for others, focus on the more generally important goal (security, of course ;)) in the generally distributed Framework and release an alternate framework for the other crowd.


Of course, (a) may be more complex, and (b) might be more expensive, but implementing security is rarely both easy and cheap.