EF Challenges and Ramblings

I’ve recently been working on a project where I used Entity Framework for the data access.  I’ve run into some challenges that don’t seem to be discussed much in the articles on how to properly use an ORM.  I’m curious whether anyone else is having these challenges.  While I’m using EF as my base, I’m pretty sure these challenges exist for other ORMs as well, including NHibernate, so I don’t believe they are caused by a specific ORM implementation but rather by using ORMs in general.


General Architecture


Before discussing the challenges I think it is prudent to explain the rationale behind the architecture.  For this project the database was not directly accessible to the application.  The database contained data used by multiple applications, so each application had to go through a common data library (CDL) to ensure consistent validation and behavior.  The CDL consisted of three layers – domain, data and entity.  The domain layer contained business objects and handled the business rules.  Applications used this layer for interacting with the database.  The data layer was a generic data model that the domain layer used.  Domain types basically wrapped the data model and provided the core functionality of the business.  The entity layer was where EF resided and was nothing more than the configurations needed to get the data model working.  Code first was used, but since the database schema was managed by another team it was never necessary to replicate the schema fully.
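
To make the layering concrete, here is a minimal sketch of how the three layers might line up.  All type names are hypothetical and the configuration only covers a fraction of what the real CDL contained.

```csharp
using System;
using System.Data.Entity.ModelConfiguration;

// Data layer: a plain data model class, no business rules.
public class EmployeeData
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Pay { get; set; }
}

// Entity layer: EF code-first configuration only; the schema itself is owned by another team.
public class EmployeeDataConfiguration : EntityTypeConfiguration<EmployeeData>
{
    public EmployeeDataConfiguration()
    {
        ToTable("Employees");
        HasKey(e => e.Id);
        Property(e => e.Name).IsRequired().HasMaxLength(100);
    }
}

// Domain layer: wraps the data model and owns the business rules; applications use this type.
public class EmployeeDomain
{
    private readonly EmployeeData _data;

    public EmployeeDomain(EmployeeData data)
    {
        _data = data;
    }

    public string Name { get { return _data.Name; } }
    public decimal Pay { get { return _data.Pay; } }

    public void GiveRaise(decimal amount)
    {
        // Business rules live here, not in the data model or the EF configuration.
        if (amount <= 0m)
            throw new ArgumentException("A raise must be positive.", "amount");
        _data.Pay += amount;
    }
}
```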


EF was hidden inside the CDL.  Since applications never directly talk to the database it didn’t make a lot of sense to expose EF.  An application can freely use whatever ORM it chooses for its own data access, and the CDL can be updated over the life of the applications.  There are arguments on both sides as to whether the ORM should be exposed directly to the application.  My opinion is that sometimes it makes sense to expose the ORM directly and other times it makes sense to hide it.  It really depends upon the application that is being built.  Ultimately I don’t think the determination as to whether the ORM is directly accessible really impacts the challenges I’ve encountered.  For this project it made sense to hide it.  The unit of work (UoW) pattern was used to manage changes in the system.  Additionally, repositories in combination with queries were used to access data.  The query objects worked with the data model directly to retrieve data and the repositories were responsible for creating the domain object wrappers.  One of the benefits of this approach was that the CDL exposed a lot of core functionality while applications could create new queries without really getting involved in how it all came together.


For purposes of this discussion let us assume a simple data model where Employee represents an employee.  Employee has some basic properties like Name, Pay and Manager.  The Manager property is a navigation property that represents the employee’s manager.  It is of type Employee.  Employee also has a navigation property called Subordinates which is a collection of Employee instances that report to them.
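
In code, the data model for this discussion might look something like the following sketch.  The foreign key property and the virtual modifiers (which let EF generate lazy-loading proxies) are assumptions I’m adding for the examples that follow.

```csharp
using System.Collections.Generic;

// The sample data model used throughout this discussion (exact property types are assumptions).
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Pay { get; set; }

    // Foreign key to the manager, plus the navigation property itself.
    public int? ManagerId { get; set; }
    public virtual Employee Manager { get; set; }

    // Employees that report to this employee.
    public virtual ICollection<Employee> Subordinates { get; set; }
}
```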


Implicit vs Explicit Updating


Most ORMs allow you to make changes to the underlying data and have the changes saved to the database when the UoW is saved – implicit updates.  But this can introduce unexpected behavior and complicates validation.  The standard approach is to create a base type that represents the data, such as Employee.  Under the hood the ORM is called to return the data and a proxy (from the ORM) is returned.  The proxy contains the data along with housekeeping stuff such as change tracking, relationships, etc.  The application uses the base type but in reality is working with the proxy.  When a value on the type is changed the underlying proxy marks it as modified.  Eventually when the UoW is saved any modified proxies are also saved.  This introduces a problem though.  If an application makes some changes to an object but doesn’t want to persist those changes then there is no way to “undo” the changes.  This is most prevalent when dealing with multiple entities (some of which you may want to save) and when dealing with cancellations.  If the proxy data changes then the data is saved when the UoW is.  This can result in unexpected behavior if an application does a “what if” scenario on an object and then later saves the UoW (perhaps in another component). 
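
A rough illustration of the problem, using an EF6-style DbContext as the UoW (EmployeeContext is a hypothetical context type):

```csharp
using System.Data.Entity;
using System.Linq;

public class EmployeeContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }
}

public static class ImplicitUpdateExample
{
    public static void WhatIfScenario(EmployeeContext context)
    {
        // The proxy returned here is change-tracked by the context.
        var employee = context.Employees.First();

        // A "what if" calculation that was never meant to be persisted.
        employee.Pay *= 1.10m;

        // ...later, some other component saves the UoW for unrelated work
        // and the speculative pay change goes to the database with it.
        context.SaveChanges();
    }
}
```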


Another problem with implicit updating is validation.  While some validation can be done at property set time, most validation requires evaluation of the entire entity and in many cases this has to be postponed until all properties are set.  In general the order of setting properties shouldn’t matter.  So it becomes the responsibility of the application to ensure that an object is valid before the UoW is saved.  For example in the case of Employee an employee cannot have a manager who is a subordinate of the employee.  While Employee could potentially detect this it would require that the properties be set in the “correct” order.  One solution that is often used is to hook the UoW’s save method and do validation there but it isn’t that elegant.  In the simplest case the save method has validation code in it.  Even with some interface to move the validation to the object itself the UoW still has to validate the object.  A bigger challenge is dealing with the validation errors.  In most cases the UoW is saved in a higher level type after all the work has been done.  The application has to assume validation may fail and provide some sort of generic error handling rather than letting the code that made the changes deal with the validation errors.
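
For illustration, a hedged sketch of the save hook, assuming an EF6-style DbContext and the Employee model from earlier; the cross-entity rule only checks direct reports to keep the example short:

```csharp
using System;
using System.Data.Entity;
using System.Linq;

public class EmployeeContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }

    // The UoW save is hijacked to validate entities once all their properties are set.
    public override int SaveChanges()
    {
        var changedEmployees = ChangeTracker.Entries<Employee>()
            .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified)
            .Select(e => e.Entity);

        foreach (var employee in changedEmployees)
        {
            // Cross-property rule: a manager cannot also be a subordinate (direct reports only here).
            if (employee.Manager != null && employee.Subordinates != null &&
                employee.Subordinates.Contains(employee.Manager))
            {
                throw new InvalidOperationException(
                    "An employee's manager cannot be one of their subordinates.");
            }
        }

        return base.SaveChanges();
    }
}
```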


An alternative approach is to require an explicit update command from the application – explicit updates.  An application requests the data, makes the modifications and then updates the data.  The update process doesn’t actually save the changes but rather does the validation to ensure the save should work.  While it seems a little odd it works better in my opinion.  Even better is that the object is no longer responsible for validation outside its own properties.  In most cases an object is part of a larger collection of entities so the collection is responsible for validation.  Assume for a moment that an employee’s name must be unique.  The name (assume first, middle and last name properties) could be set in any order so it clearly cannot be validated by the Employee object.  The validation would need to occur by the type that manages the list of employees.  Without an explicit update method the employee could be invalid.  The validation error would not be detected until the UoW is saved.  Even worse is that there is no way to “undo” the name change to allow the save to complete anyway, such as if the user cancelled the request.  Additionally an application has a better idea of where validation errors will occur so they can code accordingly.  Of course errors can still occur when the UoW is saved but in most cases the explicit update step will catch the errors.
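
From the application’s point of view the flow looks roughly like this.  The repository and UoW interfaces are hypothetical, pared down to just what this flow needs, and the Employee type is the model sketched earlier:

```csharp
using System;

// Hypothetical UoW: applications save all pending changes through this.
public interface IUnitOfWork
{
    void Save();
}

// Hypothetical domain repository with an explicit update step.
public interface IEmployeeRepository
{
    Employee GetByKey(int id);
    void Update(Employee employee);   // validates and flushes changes, does not save
}

public static class ExplicitUpdateExample
{
    public static void RenameEmployee(IEmployeeRepository employees, IUnitOfWork uow, int id)
    {
        var employee = employees.GetByKey(id);
        employee.Name = "Jane Doe";

        // Validation (e.g. name uniqueness across the collection) happens here,
        // where the calling code can handle the error sensibly...
        employees.Update(employee);

        // ...so that saving the UoW later is expected to succeed.
        uow.Save();
    }
}
```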


Explicit updates introduce a big challenge for ORMs.  To get explicit updating to work properly the proxy returned from the ORM cannot be modified until the data is ready to be persisted.  Therefore either the proxy data has to be copied or the proxy has to be held while temporary fields are used to store the changes.  When the update method is called by the application the modified data is then flushed to the proxy.  This is more work than needed by implicit updating.  It also introduces maintenance issues. 
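
One way to implement it, roughly: the domain wrapper holds the proxy plus temporary fields and only copies the pending values onto the proxy when the update is accepted.  This sketch only buffers a single property, which already hints at the maintenance cost:

```csharp
// Hypothetical domain wrapper that defers changes until the explicit update is accepted.
public class EmployeeDomain
{
    private readonly Employee _proxy;   // the tracked EF proxy
    private string _pendingName;        // temporary field holding an unsaved edit
    private bool _nameChanged;

    public EmployeeDomain(Employee proxy)
    {
        _proxy = proxy;
    }

    public string Name
    {
        get { return _nameChanged ? _pendingName : _proxy.Name; }
        set { _pendingName = value; _nameChanged = true; }
    }

    // Called by the repository's explicit Update after validation succeeds.
    internal void FlushChanges()
    {
        if (_nameChanged)
        {
            _proxy.Name = _pendingName;   // only now does the proxy get marked as modified
            _nameChanged = false;
        }
    }

    // Called when the application abandons its edits.
    public void DiscardChanges()
    {
        _pendingName = null;
        _nameChanged = false;
    }
}
```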


Another challenge of explicit updating is lazy loading of data.  Most ORMs allow for lazy loading of navigation properties, such as Manager on the Employee type.  There are valid arguments both for and against lazy loading, but assume it is appropriate in some cases.  For these cases the proxy must remain around for lazy loading to work.  It is imperative that the proxy’s navigation property isn’t referenced until needed, otherwise extra queries occur, defeating the purpose of lazy loading.  Explicitly updated objects need special code to deal with lazy loaded properties.  Often this involves checking a private field to see if the data has been loaded and, if not, referencing the proxy property to trigger the lazy loading.  For Employee, the domain instance for the manager can be created when the Manager property is referenced the first time.  The domain instance would be stored in a private field that is checked to determine whether it has been loaded yet.
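
The pattern in the domain wrapper ends up looking something like this sketch; touching the proxy’s Manager property is what actually triggers EF’s lazy-load query, so it is deferred until the first access:

```csharp
// Hypothetical domain wrapper that lazily wraps the Manager navigation property.
public class EmployeeDomain
{
    private readonly Employee _proxy;
    private EmployeeDomain _manager;
    private bool _managerLoaded;

    public EmployeeDomain(Employee proxy)
    {
        _proxy = proxy;
    }

    public EmployeeDomain Manager
    {
        get
        {
            if (!_managerLoaded)
            {
                // Referencing the proxy's navigation property here is what causes EF to
                // run the lazy-load query, so it must not be touched any earlier.
                var managerProxy = _proxy.Manager;
                _manager = managerProxy != null ? new EmployeeDomain(managerProxy) : null;
                _managerLoaded = true;
            }

            return _manager;
        }
    }
}
```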


Even more challenging is changing a navigation property.  In most ORMs simply assigning a new value to the property would trigger an add to the database.  Updating a navigation property instead requires pulling the original value from the database and assigning it to the property.  One solution to this problem is using slim objects.


Full vs Slim Objects


There are a couple of different contexts in which data is needed – view and edit.  When viewing data you generally want a full domain object where you are working with objects and can access all their information, such as a manager’s name.  When editing data generally only the root object is needed.  The UI generally deals with getting related data through other means, such as drop down lists.  ORMs are great for the view scenario but overkill for edit scenarios.  A slim object is a simple domain object that doesn’t have any navigation properties.  Instead it exposes the keys for the related objects.  This drastically reduces the amount of data being pulled from the database just to update an object. 


For example if a web application wanted to view the details of an employee then the manager’s name would be convenient to have through a navigation property.  But when editing the employee the managers would more likely be displayed in a dropdown list.  The dropdown would be populated by retrieving a list of all employees who could be managers, storing the name as the text and the key as the value.  When editing the employee the manager’s key is needed but the name is useless.  A slim object would contain only the primitive properties that need to be set plus the manager’s key.  To update the database the data model’s manager key column would be set while the navigation property would be ignored.  In EF this works well because the key column takes precedence over the (unloaded) navigation property.
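
A sketch of the slim object and how it might map back onto the data model; the ManagerId foreign key property (an assumption in the earlier model sketch) is what makes this work, and the navigation property is never touched:

```csharp
// Hypothetical slim object: primitive properties plus related keys, no navigation properties.
public class EmployeeSlim
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Pay { get; set; }
    public int? ManagerId { get; set; }
}

public static class EmployeeSlimMapping
{
    // Applies the slim object's values to the tracked data model instance.
    public static void Apply(EmployeeSlim slim, Employee dataModel)
    {
        dataModel.Name = slim.Name;
        dataModel.Pay = slim.Pay;

        // Setting the foreign key column is enough; the manager row is never loaded
        // and the Manager navigation property is left alone.
        dataModel.ManagerId = slim.ManagerId;
    }
}
```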


There are some issues with slim objects to be aware of.  The first issue is that they need to do the same sort of validation the full object would do, albeit with less information; otherwise the slim object becomes a way to circumvent validation.  A simple solution is to have the full objects wrap the slim objects and let the slim objects do the core validation.  But at that point when does a slim object really just become the data model?  This leads to the bigger issue with slim objects: exposing more of the underlying data model than desired.  In my opinion domain objects are really just convenience objects that provide common functionality over the existing data model.  If performance outweighs that functionality then it is a reasonable tradeoff.


Surrogate Keys vs Persistent Keys


Persistence ignorance (PI) is a common term thrown out when talking about ORMs.  The goal is to have data models be PI, but where does the key, as defined by the database, fit?  Is the database key really database specific?  I don’t think so.  Entities need some sort of unique identifier.  If we were working with in-memory objects then the address would probably work, but with databases something more permanent is needed.  I don’t really see the benefit in trying to hide the database key other than to complicate things, with one exception.  The generation of the key should be PI but its usage should be encouraged in order to simplify working with objects.  As far as the application goes the key is some unique value that is associated with an object.  Whether it came from the database, some key generator or whatever is not relevant and, hence, PI.


So how do we support PI keys while still allowing the data model to work?  A surrogate key is generally used.  A surrogate key is some key generated for an object and mapped to it for its life.  A simple sequentially incrementing number may do, or something more elaborate may be needed.  Since the database already has a key, why reinvent the wheel?  I think it makes a lot of sense to use the database key when it is available (it also makes detecting new objects easier).  But a problem arises with new objects.  Assume that a new employee is added to the system but not yet saved.  The employee needs some sort of surrogate key to identify itself.  Without the key the new employee could not be properly validated to ensure they are assigned to a manager that is not a subordinate (assuming the key is used for detection).  One solution is to assign sequentially decrementing values to the key until the object is formally added to the database, at which point the key changes to the database key.  But doing that means that an object’s key can change, at least when it transitions from new to existing in the database.  That may be OK for some applications but it can cause confusing behavior.  Even more dangerous are applications that use knowledge of how keys are generated to implement some behavior.
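
A minimal sketch of the decrementing-key idea; the generator is hypothetical and the negative values only live until the database assigns the real key, which is exactly the key-change caveat above:

```csharp
using System.Threading;

// Hypothetical generator for temporary keys on not-yet-saved objects.
// Negative values can never collide with real identity values coming from the database.
public static class TemporaryKeyGenerator
{
    private static int _lastKey;

    public static int Next()
    {
        return Interlocked.Decrement(ref _lastKey);   // -1, -2, -3, ...
    }
}

// Usage sketch: a new employee gets a temporary key so it can participate in validation
// (e.g. manager/subordinate checks) before it has ever been saved.
//
//   var employee = new Employee { Id = TemporaryKeyGenerator.Next(), Name = "New Hire" };
//   ...save the UoW...
//   employee.Id is then replaced by the database-generated key, so the key has changed.
```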


Given the problem with using database keys as unique identifiers it is no surprise that most people use surrogate keys that are completely independent of the database.  But this has its own challenges.  The first challenge is that if the code is being debugged to trace down a problem it can be confusing trying to map the domain object back to the real database object.  Another challenge is the mapping that has to happen between domain objects.  Going back to the Employee slim object, for example, the surrogate key would be used to associate the manager with the employee.  But at some point the surrogate key has to be mapped back to the actual database key, if any.  Some ORMs handle this automatically but it can be difficult to do the mapping while you are debugging your code if you don’t know how the ORM does its magic.


Yet another challenge with the surrogate mapping is cross-context objects.  It is common in applications to cache frequently used data, such as countries and departments.  The surrogate/database key mappings have to persist for the life of the objects, so where are they stored?  Certainly you wouldn’t want the ORM to store all the mappings forever as, over time, this would waste memory as objects are loaded and unloaded.  It would also be a bad idea for the ORM to generate new surrogate keys for shared data across context calls.  If that were to happen then the same department retrieved in two different contexts would have different keys.  This may or may not be a problem depending upon how they are used.  If the surrogate key is strictly used to map back to the database key then everything is probably fine.  But imagine if an application wanted to determine whether the user had changed a value by simply comparing the surrogate keys, since they do uniquely identify the objects.  It wouldn’t work.  So the application has to be aware of how surrogate keys work to ensure they are not misused?  How is this any different from understanding how database keys work?  I’m not convinced surrogate keys solve the general problem any better than using the database keys.  They just introduce a different set of problems.


Repository/Query Pattern


The repository pattern has taken a beating recently from the ORM folks since the ORM technically already implements the pattern.  But what most ORM folks won’t clarify is that ORMs take it too far.  In a typical ORM every table can be inserted into, updated, deleted from and queried.  But in most systems there is at least a core set of tables that are read only to the application, lookup tables for example.  The ORM won’t distinguish.  Therefore I think the repository pattern is useful, if only to control the functionality available.  This doesn’t mean that you need to implement a concrete version of the repository; an interface is sufficient.  I don’t even agree that a generic repository interface is good as it is really no different than the ORM.  In my case I generally create an interface that exposes a core query method and, depending upon the table, a set of methods to add, update and remove entities.  The query method is the interesting part.  It generally accepts an interface that is specific to the repository and returns an enumerable list of domain objects.  The repository is responsible for mapping data models to domain objects and vice versa.  This is about as generic as I think a repository can get.
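
The resulting interfaces end up looking roughly like this sketch (names are hypothetical, the domain and slim types come from the earlier sketches, and a read-only lookup table would simply omit the mutating members):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical generic query: works against the data model; detailed in the next snippet.
public interface IQuery<TData>
{
    IQueryable<TData> Filter(IQueryable<TData> source);
}

// Repository for a writable table: one core query method plus only the mutations
// that make sense for this particular table.
public interface IEmployeeRepository
{
    IEnumerable<EmployeeDomain> Query(IQuery<Employee> query);

    EmployeeDomain Add(EmployeeSlim employee);
    void Update(EmployeeSlim employee);
    void Remove(int employeeId);
}
```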


The query interface just contains a single query method so it could actually be a generic type if desired.  But I find that generic types are harder to read in code, so I generally create a domain-specific query interface that inherits from the generic query interface just to keep it clean.  This also makes mocking the interface much easier.  Because the repository understands the query interface it becomes very easy to create any number of queries without touching the repository.  I generally use an extension class to add standard queries to the repository to simplify its usage.  For example I would probably create extension methods for IEmployeeRepository that get an employee given their key, get the subordinates of an employee or get the employees who could be managers.  Even better is that an application could create its own queries without having to touch the repository.  Normally extension methods aren’t easy to mock, but since all the queries ultimately call the same method it is easy to mock the repository to return a fixed set of data for any query.
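
Continuing the sketch, and reusing the IQuery<TData> interface from the previous snippet: a domain-specific query interface, one concrete query object and an extension method that exposes it as a standard query.

```csharp
using System.Collections.Generic;
using System.Linq;

// Domain-specific query interface; inheriting from the generic one keeps call sites readable.
public interface IEmployeeQuery : IQuery<Employee>
{
}

// A concrete query object that filters the data model directly.
public class EmployeesByManagerQuery : IEmployeeQuery
{
    private readonly int _managerId;

    public EmployeesByManagerQuery(int managerId)
    {
        _managerId = managerId;
    }

    public IQueryable<Employee> Filter(IQueryable<Employee> source)
    {
        return source.Where(e => e.ManagerId == _managerId);
    }
}

// Standard queries hang off the repository as extension methods, so applications can
// add their own queries without ever touching the repository itself.
public static class EmployeeRepositoryExtensions
{
    public static IEnumerable<EmployeeDomain> GetSubordinates(
        this IEmployeeRepository repository, int managerId)
    {
        return repository.Query(new EmployeesByManagerQuery(managerId));
    }
}
```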


One of the challenges with this approach is efficiently querying the database.  It is unwise to pull all the rows from a table just to filter them in a query.  Therefore the ORM’s entity set has to be queried directly by the query objects.  Since queries are data-level objects this makes sense.  A query is strictly for getting the right data so it should be able to query the underlying database and retrieve the exact data it needs.  The data model helps here because the query can use the data models without regard for the actual ORM implementation.  All that needs to happen is that the ORM entity set needs to be exposed.  This is where the challenge comes in.  For EF that is DbSet<T>.  Unfortunately using this type puts a hard dependency on EF.  Even worse is that this type isn’t easily testable, making the queries harder to test.  EF does have an IDbSet<T> interface that can be used instead, and is mockable, but again there is a hard dependency on EF.  Ideally this should have been brought into the core framework but it hasn’t been.  But it is straightforward to create an equivalent interface that does the same thing, is ORM agnostic and can wrap an ORM implementation, so all is good.
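
A minimal sketch of such an ORM-agnostic entity set and an EF6-backed wrapper; IDbSet<T> already provides most of this but carries the EF dependency with it.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

// ORM-agnostic view of an entity set: enough for query objects, nothing EF-specific.
public interface IEntitySet<TData> : IQueryable<TData> where TData : class
{
    TData Add(TData entity);
    TData Remove(TData entity);
}

// EF6-backed implementation that simply forwards to DbSet<TData>.
public class DbEntitySet<TData> : IEntitySet<TData> where TData : class
{
    private readonly DbSet<TData> _set;

    public DbEntitySet(DbSet<TData> set)
    {
        _set = set;
    }

    public TData Add(TData entity) { return _set.Add(entity); }
    public TData Remove(TData entity) { return _set.Remove(entity); }

    // IQueryable plumbing forwards to the underlying EF set so LINQ queries still
    // execute against the database.
    public Type ElementType { get { return ((IQueryable)_set).ElementType; } }
    public Expression Expression { get { return ((IQueryable)_set).Expression; } }
    public IQueryProvider Provider { get { return ((IQueryable)_set).Provider; } }

    public IEnumerator<TData> GetEnumerator()
    {
        return ((IEnumerable<TData>)_set).GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
```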


But there are additional challenges with the entity set.  Often we want to pull a set of related data from the database rather than lazy loading it.  At least for EF this is a problem.  EF exposes this functionality off the entity set via the Load method.  But the method is implemented as an extension method meaning it won’t work for a custom entity set interface.  So you’ll have to reinvent the wheel.  Even worse is that the existing implementation relies on the entity set deriving from a specific type so you have to create a wrapper that uses the right object otherwise it won’t work.
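
For what it’s worth, one workaround (a sketch, not the only option) is to let the custom entity set fall back to EF’s DbQuery<T>.Include instance method when it knows it is holding an EF query underneath; non-EF sources, such as in-memory test doubles, simply ignore the hint.  This only works because the wrapper can get at the right underlying object, which is exactly the point above.

```csharp
using System.Data.Entity.Infrastructure;
using System.Linq;

public static class EntitySetEagerLoading
{
    // Hypothetical helper: eager-load a related path if the source really is an EF query.
    public static IQueryable<TData> WithInclude<TData>(IQueryable<TData> source, string path)
        where TData : class
    {
        var dbQuery = source as DbQuery<TData>;
        if (dbQuery != null)
            return dbQuery.Include(path);   // EF instance method, no extension method needed

        // Non-EF sources (e.g. a List<T> used in a unit test) just ignore the hint.
        return source;
    }
}
```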


One final challenge with using entity sets is LINQ itself.  Most queries will likely be implemented using it but the actual provider used is based upon the entity set (which is determined by the ORM).  As a result you can create queries, write unit tests to ensure they are valid and yet at runtime they will still fail because of the underlying LINQ provider differences.  There really isn’t any good solution to this problem short of using the ORM directly and providing all the infrastructure needed to unit test it.
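
A classic example of the mismatch: the query below passes a unit test against an in-memory list but throws NotSupportedException at runtime against EF, because LINQ to Entities cannot translate the local method call into SQL.

```csharp
using System.Collections.Generic;
using System.Linq;

public static class ProviderDifferenceExample
{
    private static string NormalizeName(string name)
    {
        return (name ?? string.Empty).Trim().ToUpperInvariant();
    }

    public static IEnumerable<Employee> FindByName(IQueryable<Employee> employees, string name)
    {
        // LINQ to Objects happily executes this; LINQ to Entities throws because
        // NormalizeName cannot be translated by the underlying provider.
        return employees.Where(e => NormalizeName(e.Name) == NormalizeName(name)).ToList();
    }
}
```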


Final Thoughts


This post has really been more about my experience with ORMs (at least EF) than technical content.  I just felt that the challenges I encountered had to be felt by more people using ORMs.  I still feel that ORMs are viable for applications that warrant their usage.  I don’t think they are useful for every application and I don’t believe they solve all the problems that traditional data access has.  I believe that a traditional data access layer would be more consistent, more easily testable without extra interfaces and simpler to understand.  But at the same time ORMs make it really easy to add additional entities later on without having to write a bunch of new code.  This reduces the long term maintenance and effort needed to support an evolving system.  Lazy loading, query generation and no hand-written SQL are certainly nice features to have.  Unfortunately ORMs don’t work with every database design.  Databases that rely heavily on stored procedures or views will be challenging to implement with an ORM.  If nothing else the concepts that ORMs use (repositories, UoW, queries, PI, etc.) are good concepts to apply even when you cannot use an ORM.  I’ll continue to use EF, and maybe NHibernate, for complex applications until the next generation of data access layers comes along.