I’m finishing up my company’s second rewrite of an ASP.NET application to MVC and Razor. One of the things we learned along the way is that controllers and views are no more testable than Web Forms pages and their code-behinds. So for newer code we have been using the orchestrator pattern. I’d like to share our thought process on this approach.
First, a definition of the orchestrator pattern. The pattern has been discussed by several different people and mentioned in at least one book, but my definition of an orchestrator is something that manages the flow of data from one point to another.
For MVC an orchestrator is responsible for getting data from the back end and sending it to a view for display. The orchestrator is also responsible for getting the data from the view and passing it to the back end for validation and saving. A typical flow in this type of application would look like this:
- ASP.NET calls an action on a controller
- Controller collects any needed data (generally the parameters to the action) and calls a method on the orchestrator
- Orchestrator validates the data and passes it along to the service layer for processing
- Service layer performs whatever action is necessary to meet the request and returns a result
- Orchestrator does any post-request processing on the result and returns a (potentially updated) result
- Controller uses result to return appropriate action result to ASP.NET runtime
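The flow above might look roughly like this in code. This is only a sketch: the type names (`ProductController`, `ProductOrchestrator`, `ProductService`, `ProductViewModel`) and the `GetProduct` method are hypothetical, not from any real project.

```csharp
// Sketch only - assumes ASP.NET MVC; all type names are hypothetical.
public class ProductController : Controller
{
    private readonly ProductOrchestrator m_orchestrator = new ProductOrchestrator();

    // Step 1: ASP.NET routes the request to this action.
    public ActionResult Details ( int id )
    {
        // Step 2: the controller collects the needed data (here, a route parameter)
        // and hands it to the orchestrator.
        var model = m_orchestrator.GetProduct(id);

        // Step 6: the controller turns the result into an action result for the runtime.
        return View(model);
    }
}

public class ProductOrchestrator
{
    // Steps 3-5: validate the data, call the service layer and shape the result.
    public ProductViewModel GetProduct ( int id )
    {
        if (id <= 0)
            throw new ArgumentOutOfRangeException("id");

        var product = new ProductService().GetProduct(id);

        return new ProductViewModel(product);
    }
}
```

Note that nothing in the orchestrator references `Controller`, `ActionResult` or any other ASP.NET type, which is what makes it testable in isolation.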
A controller in MVC is really an orchestrator, but unfortunately it is too tightly coupled to the runtime to focus solely on data flow. One could argue about whether a controller is testable. Yes, there are folks who test controllers, and yes, there are ways to make them testable, but is the work involved really worth it? I don’t believe so. Let the controller do what it is designed to do best: handle requests routed from the runtime and return results back to it. This lets the controller make full use of the MVC infrastructure to get data from the route and views, cache data where needed and respond to requests without introducing any unneeded infrastructure such as an IoC container.
One of my fundamental rules of unit testing is that I should be able to architect an application with the best design possible without regard for hooks to make unit testing possible. What this means is that I’m not making every method of every type virtual just so it can be mocked, and I’m not exposing everything as an abstraction just so I can replace it in a unit test. That mentality produces very loosely coupled code that is easy to test but hard to understand, slower than necessary and ultimately harder to maintain. Of course, if the design mandates loose coupling then by all means use it, but never just to make unit testing easier.
I’ve mentioned before that unit testing as it stands today is not where we need it to be. I am confident that testing frameworks will step up to the plate and allow us to design applications without regard for unit testing, and the frameworks will just work. I think we’re getting closer to this goal. With the addition of Fakes to Visual Studio 2012 we can now stop worrying about things that aren’t inherently testable (for example, static methods on static classes) because the testing framework can circumvent them. This is just one example of designing without testing in mind.
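As a concrete illustration, a Fakes shim lets a test divert a static call such as `DateTime.Now` without the production code exposing any seam for it. This is my own sketch of the standard shim usage; the `Order` type and its `CreatedOn` property are hypothetical.

```csharp
// Test-project sketch - requires a generated Fakes assembly for System
// (available in the premium Visual Studio 2012 editions).
[TestMethod]
public void NewOrder_IsStampedWithCurrentDate ()
{
    using (ShimsContext.Create())
    {
        // Divert the static DateTime.Now property for the duration of the test.
        System.Fakes.ShimDateTime.NowGet = () => new DateTime(2012, 1, 1);

        // Order is a hypothetical type whose constructor reads DateTime.Now.
        var order = new Order();

        Assert.AreEqual(new DateTime(2012, 1, 1), order.CreatedOn);
    }
}
```

The production `Order` class needs no interface, no virtual member and no injected clock; the shim intercepts the static call at test time.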
Going back to the orchestrator pattern, when I test the data flow from the view to the back end I don’t want to concern myself with how the data got there. It just did. Whether it came from a controller and ASP.NET or from the test framework shouldn’t matter to the orchestrator. It just works. By eliminating the need to test a controller I can stay focused on the important part: the orchestrator.
The controller’s job is to collect data from the ASP.NET runtime and pass it on to the orchestrator for processing. When the orchestrator is done it returns data that the controller then passes on to the view for rendering. The controller manages the data exchange to and from ASP.NET while the orchestrator does the actual work. If we need to pull in extra data that is not accessible from the view (e.g. the user context or configuration settings) then the controller is responsible for that as well.
So far we have found that a single orchestrator for each controller works well. Not every action on the controller needs a corresponding orchestrator method, but we are seeing a pattern where this is generally true. Each action basically becomes boilerplate code where data from the view (generally the action method’s parameters) is passed to a method on the orchestrator. The results of the orchestrator are then passed on to the appropriate view. The controller only needs additional code when the action requires extra work. For example, if an action can only be called by certain users then an attribute or some code in the action enforces that. The orchestrator is not responsible for that aspect.
This corresponds nicely to the split between code that is generally unit testable and code that is not. Unit testing that an action received the right data generally doesn’t make sense, as we rely on the framework for that. The processing, however, is testable. Accordingly, we put code that either doesn’t need to be unit tested or can’t easily be unit tested in the controller, and the rest moves to the orchestrator. Orchestrators are ASP.NET-agnostic, so all data needed by a method must be passed to it. We have set up a base controller class so that the controller creates its orchestrator the first time an action method needs it; when the controller goes out of scope, so does the orchestrator. Most of our action methods are 2-3 lines long and none of them require unit testing. Any action longer than that forces us to take a closer look to see where we can refactor the code.
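Our base controller is roughly equivalent to this sketch. The generic parameter and the `new()` constraint are my assumptions about one simple way to get the lazy, per-controller lifetime described above; the `CustomerController` and its types are hypothetical.

```csharp
// Sketch of a base controller that creates its orchestrator on first use.
// The orchestrator lives exactly as long as the controller instance does.
public abstract class OrchestratedController<TOrchestrator> : Controller
                      where TOrchestrator : class, new()
{
    private TOrchestrator m_orchestrator;

    // Lazily created the first time an action method needs it.
    protected TOrchestrator Orchestrator
    {
        get { return m_orchestrator ?? (m_orchestrator = new TOrchestrator()); }
    }
}

// A typical 2-3 line action: forward the view data, return the result.
public class CustomerController : OrchestratedController<CustomerOrchestrator>
{
    public ActionResult Edit ( CustomerViewModel model )
    {
        var result = Orchestrator.UpdateCustomer(model);
        return View(result);
    }
}
```

Because the base class does the bookkeeping, no IoC container is needed and each action stays short enough that it doesn’t warrant a unit test of its own.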
The controllers reside in the main MVC application. This project is considered not testable so nothing goes in it that we want to test.
The orchestrator is really where the bulk of the controller work is handled. Since the orchestrator has no connections to ASP.NET we can test it much more easily. Even so, the orchestrator is generally pretty simple because it is mainly responsible for invoking the service layer, where the real work is done. In a traditional MVC application the controller calls the service layer directly, but there is often some work that has to happen before or after a service call. For example, service layers generally work with domain objects while the controller works with view models. Where does this translation take place? In a traditional application the controller would need to do it. With the orchestrator pattern it moves to the orchestration layer, where we can also test the translation. Some models are easy to translate, but some require more work (e.g. concatenating an address into an easily displayable string).
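A sketch of an orchestrator method that owns this pre- and post-call translation work (all names are hypothetical, and `ToDomainObject` is one of the translation options discussed below):

```csharp
// Sketch - lives in the orchestration assembly; no ASP.NET types anywhere.
public class CustomerOrchestrator
{
    private readonly CustomerService m_service = new CustomerService();

    public CustomerViewModel UpdateCustomer ( CustomerViewModel model )
    {
        // Pre-call work: translate the view model into the domain object
        // that the service layer expects.
        var customer = model.ToDomainObject();

        var updated = m_service.Update(customer);

        // Post-call work: translate the result back into a view model
        // for the controller to hand to the view.
        return new CustomerViewModel(updated);
    }
}
```

The translation on both sides of the service call is exactly the logic we want under test, and here it sits in plain, framework-free code.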
For the projects I’m working on we start with all the code in an orchestrator method and move code that cannot be easily tested (anything related to ASP.NET for example) to the controller. Things that can’t be tested generally show up easily as we are writing our tests. The orchestrators reside in an orchestration assembly. The orchestration assembly is used by the main MVC application and has a dependency on the service layer. This keeps it pretty clean.
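Because an orchestrator takes plain data and has no ASP.NET dependencies, a test can drive it directly. An MSTest-style sketch, with hypothetical types; the assertion illustrates testing a complex mapping such as the address concatenation mentioned earlier:

```csharp
[TestClass]
public class CustomerOrchestratorTests
{
    [TestMethod]
    public void UpdateCustomer_BuildsDisplayAddress ()
    {
        // No controller, no routing, no HTTP context - just the orchestrator.
        var orchestrator = new CustomerOrchestrator();
        var model = new CustomerViewModel { Street = "1 Main St", City = "Springfield" };

        var result = orchestrator.UpdateCustomer(model);

        // Verifies the domain-to-view-model translation, not ASP.NET plumbing.
        Assert.AreEqual("1 Main St, Springfield", result.DisplayAddress);
    }
}
```

Anything in the method that resists this style of test (session state, request data, caching) is a signal that the code belongs in the controller instead.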
Different companies have different opinions about what role view models play in MVC. Some believe they should only be used for display while others use them as domain objects. As a result, some companies put the view models in the MVC application while others put them in the business layer. There are advantages and disadvantages to both approaches and I don’t believe either is wrong. I believe in keeping models as close to where they are needed as possible.
For the orchestrator pattern I think it makes sense for the orchestrator to work with view models. The translation between view and domain models is testable so the view models must reside in a place that can be tested. Therefore our view models reside in the orchestration assembly so we can test any complex mappings we may have. We’re not sure if this is truly the best approach yet but it is working well for the projects we’re using it with. The orchestrator is responsible for getting the domain objects from the view models it receives, calling the service layer with the domain objects and then translating the returned objects back into view models.
The view models are dumb collections of properties with no business logic. Only the properties needed for a specific view are provided, so they stay small. We add some formatted properties to simplify our view code, but that is about it. The hardest part is translation to and from the domain object. Domain objects reside in the service layer, and the service layer has no knowledge of the orchestration assembly. For translation from a view model to a domain object we either expose a method on the view model directly (e.g. ToDomainObject) or we create an extension method on the domain object that accepts a view model (e.g. FromViewModel). One could argue that this is mixing concerns, but not everything is cut and dried in software. I tend to lean toward the extension method route, but then you end up with lots of single-method extension classes (one for each domain object).
For translation from a domain object to a view model, a constructor on the view model is generally used. Again this mixes concerns, but it is far easier than creating an extension method and it keeps the translation with the code that actually relies on it. The downside is that the MVC application will then need a reference to the domain objects in order to compile. If that is a concern then extension methods might be the better route.
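Side by side, the two translation directions might look like this (all names hypothetical; `Customer` stands in for a domain object from the service layer):

```csharp
// Lives in the orchestration assembly alongside the orchestrators.
public class CustomerViewModel
{
    public string Name { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
    public string DisplayAddress { get; set; }

    public CustomerViewModel ()
    { }

    // Domain object -> view model: a constructor on the view model.
    // Note this forces the MVC project to reference the domain objects to compile.
    public CustomerViewModel ( Customer source )
    {
        Name = source.Name;
        Street = source.Street;
        City = source.City;

        // Formatted property built purely to simplify the view code.
        DisplayAddress = source.Street + ", " + source.City;
    }

    // View model -> domain object: a method exposed directly on the view model.
    public Customer ToDomainObject ()
    {
        return new Customer { Name = Name, Street = Street, City = City };
    }
}
```

The alternative is a `FromViewModel`-style extension method on `Customer`, which keeps the view model cleaner at the cost of one extension class per domain object.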
The service layer is a traditional service layer where data is validated, business rules are enforced, processes are run and changes are made. The service layer remains agnostic of the orchestration assembly and deals only with domain objects. The service layer is shared by many orchestrators. One of our guidelines is to add functionality to the orchestrator first. If more than one orchestrator needs similar functionality then we “promote” it to the service layer. This prevents our service layer from becoming a hodgepodge of one-off methods needed by only one orchestrator.
So far our use of the orchestrator pattern has been really successful. We might not be following a pure MVC or OOP architecture, but our code is solid, testable and easy to maintain. New developers can be given a quick overview of the pattern, grasp the concepts easily and start working. If a bug appears in the application we generally start with the orchestrator. If we find a problem with getting the right data we know it is a controller/ASP.NET issue; otherwise we head toward the service layer. Overall we are happy with the pattern and will continue to use it going forward. I encourage you to check out the pattern if you have not yet used it, especially if you are struggling to get your MVC apps tested properly.