Any dependency injection (“DI”) framework does at least one thing: it resolves the location of a piece of code so that other pieces of code do not have to know where it is. Classical DI frameworks like Spring.NET or Unity share a standard approach:
- Every service (a class that does something, rather than merely holding state) implements an interface.
- The exact type of the service – its name and assembly – and initialization parameters are added to a registry. While XML files are often used for defining this registry, there are other flavors as well; in any case, there is a central and explicit definition of what the application consists of.
- Everyone who uses the service exposes a dependency on it via its interface. A property or a constructor parameter is used by the DI framework to pass in an instance of the service.
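The three steps above can be sketched in a few lines. This is a toy illustration in Python rather than .NET, and all names (`MailService`, `Notifier`, `Container`) are hypothetical; a real framework would read the registry from XML or a code-based catalog rather than a dict:

```python
class MailService:
    """A service: it does something, and satisfies an (implicit) interface."""
    def send(self, to, body):
        return f"mail to {to}: {body}"

# The registry: a central, explicit definition of what the application
# consists of, mapping a service name to its exact type.
registry = {"mailService": MailService}

class Notifier:
    # The dependency is exposed as a constructor parameter; the consumer
    # never names the concrete type or knows where it lives.
    def __init__(self, mail_service):
        self.mail_service = mail_service

    def notify(self, user):
        return self.mail_service.send(user, "hello")

class Container:
    """A toy container: resolves names from the registry and instantiates."""
    def __init__(self, registry):
        self.registry = registry

    def resolve(self, name):
        return self.registry[name]()

container = Container(registry)
notifier = Notifier(container.resolve("mailService"))
print(notifier.notify("alice"))  # -> mail to alice: hello
```

The point of the exercise is the last two lines: `Notifier` receives *something that can send mail* through its constructor, and only the container knows which concrete type that is.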
Some DI frameworks use attributes to define which property is relevant for the DI container, some don’t. Some use XML registry files, others offer a catalog definition in code. A variety of frameworks is available in production quality, so the choice of the perfect DI framework often becomes a highly subjective matter, largely influenced by personal style. My favorite flavor is Spring.NET: no attributes, an XML registry, and autowiring by name.
Style aside, when a project has been developed with a dependency injection framework in place, it is likely that:
- It lends itself to unit testing. A DI framework promotes the idea of decoupling units of functionality. This makes writing tests after the implementation significantly easier. (Now let’s not argue whether it’s better to write tests before implementation; let’s accept that a lot of developers just do it that way.)
- There is decent separation of concerns. A key principle of DI is that you pass dependencies as interface implementations – this alone makes you think about what the correct interfaces are.
- You know what is inside the application. The registry can be read like the yellow pages of your application: what services exist, and even – if you stick to a verbose style without autowiring – where they are used.
Now comes MEF. With MEF, there is no registry; it simply searches through everything that is there and dynamically composes its catalog. In addition, MEF lets you expose not only objects as resources that can be located by the framework, but also property values, and even methods. This versatility makes MEF a hugely productive piece of infrastructure.
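To make the contrast with the registry-based approach concrete, here is a toy sketch of MEF-style composition, again in Python rather than .NET. The names (`export`, `compose`) are invented stand-ins for MEF's `[Export]` attribute and composition container; the point is that nothing is registered centrally, and classes, plain values, and functions can all be exports:

```python
_exports = {}

def export(contract):
    """Marks a class, value, or function as an export under a contract name
    (a stand-in for MEF's [Export] attribute)."""
    def mark(thing):
        _exports.setdefault(contract, []).append(thing)
        return thing
    return mark

@export("greeter")
class Greeter:                      # an object export
    def greet(self, name):
        return f"hello, {name}"

export("banner")("MyApp 1.0")       # a plain value export

@export("version")
def version():                      # even a function can be an export
    return "1.0"

def compose(contract):
    """Dynamically composes the catalog from whatever was discovered."""
    return _exports.get(contract, [])

print(compose("greeter")[0]().greet("world"))  # -> hello, world
print(compose("banner")[0])                    # -> MyApp 1.0
print(compose("version")[0]())                 # -> 1.0
```

Adding another `@export("greeter")` class anywhere in the codebase extends the application without touching any shared registry file, which is exactly the plug-in property discussed below.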
It makes applying DI easy in situations where you don’t control the lifetime of objects. Think of WCF, Windows Workflow Foundation, or Silverlight controls. With a classical DI framework, applying the Service Locator pattern is the only way out – MEF does not require any special patterns.
The application can be extended without changing a shared resource (the DI registry). This resolves a major pain point when you develop in an environment that you do not own – either something that explicitly offers a plug-in model, or when you provide customizations to a third-party system.
MEF is peer-to-peer Dependency Injection
The downside, however, is that MEF does not enforce or encourage better testability, class design, or discoverability of functions. And this is where the smell wafts in: where a classical DI framework can act as a catalyst around which Good Code crystallizes, MEF does not require any coding patterns to support it, and hence does not have this potential.
But that’s not a bad thing either, because what is gained is the ability to decouple the implementation of functionality from where it is located – and that is the primary and ultimate goal of the inversion of control pattern. With MEF, this goal can be achieved more easily in scenarios that would otherwise be complex, or require support from additional patterns like Service Locator. And since it’s a common observation that people prefer things that are easy over those that are hard, it’s likely that you’re going to see more applications utilize DI with MEF that otherwise would not have.
DI frameworks exert a positive force on the code structure. But that’s not what they’re there for.
While the usage of a DI framework can lead to an overall higher code quality, there are much stronger drivers for these goals. The best way to get to an application that lends itself to unit testing is to write unit tests. For a testable class design, it does not matter that much if the first version of the class has been written in a testable fashion as long as developers are not afraid to refactor it while they are writing the tests.
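The "write unit tests, then refactor toward testability" argument is easy to see in code: once a class takes its dependency through the constructor, the test substitutes a fake and asserts on the interaction. A brief sketch with hypothetical names, continuing the Python convention of the earlier examples:

```python
class FakeMailService:
    """A hand-rolled fake that records calls instead of sending mail."""
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))
        return "ok"

class Notifier:
    def __init__(self, mail_service):
        self.mail_service = mail_service

    def notify(self, user):
        return self.mail_service.send(user, "hello")

# The test: no container, no registry, no real mail server required.
fake = FakeMailService()
Notifier(fake).notify("bob")
assert fake.sent == [("bob", "hello")]
```

Note that nothing here needs a DI framework at all: the testability comes from the constructor-injection shape of `Notifier`, which is exactly why writing the tests is a stronger driver of that shape than any container.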
In the role of a development manager who’s worried about poor separation, I’d recommend looking at the afferent and efferent coupling code metrics: a class should have some outgoing dependencies and low complexity, otherwise it’s likely to do too much (or it has no dependencies and a lot of types that depend on it: then it’s a low-level service). NDepend is a great tool for measuring these kinds of metrics. NDepend can also find MEF abuses for you; say you don’t want methods to be used as exports.
And then there is one big advantage of MEF: it’s something like the batteries-included, official DI framework for .NET. This is quite powerful because it’s going to be covered in training materials, code samples, and maybe even developer certification tests. When you hire a developer onto your product, you won’t have to train them on your specific flavor of framework, or worry whether there has been any exposure to the technique at all.