[originally posted 5/8/2017]
One of the things I think about is how to avoid problems with dependencies. Specifically, ways to alter a codebase so that it is less likely to do goofy things, performance-wise or clarity-wise, by taking inappropriate dependencies. Dependencies you can't afford, or shouldn't have at all, are the leading cause of performance problems in lots of applications.
I’ve been looking at Google Guice a lot lately and I’m not sure I like it. I mean, it seems to do what it says well enough; that’s not the problem. The problem is this: does deep dependency injection actually lead you to a good place?
Here’s the thing — in Windows, and in Microsoft.NET for that matter, there are FAR too many global methods. Every one of those creates the possibility that any piece of code in any location might come along and call it. And it’s too darn easy to actually do it. So you get code in the bowels of your system that decides it really needs to (e.g.) write to the registry *right now* and that can be a very bad idea. Because maybe that code is being called in a context where any kind of I/O is just a really bad idea.
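To make the hazard concrete, here's a minimal sketch, with hypothetical names (`GlobalRegistry`, `ImageDecoder`): a static, globally reachable API means any method anywhere can quietly reach for it, and nothing in the caller's view gives it away.

```java
// Hypothetical static registry API -- globally reachable from any code.
class GlobalRegistry {
    static void write(String key, String value) {
        // Imagine real registry I/O happening here.
    }
}

class ImageDecoder {
    byte[] decode(byte[] input) {
        // Nothing in this method's signature warns the caller that
        // decoding does registry I/O -- but any line can reach the global,
        // even in a context where I/O is a very bad idea.
        GlobalRegistry.write("lastDecodedSize", String.valueOf(input.length));
        return input;
    }
}
```

The point is that you can't tell `decode` touches the registry without reading its body, and every body in the system is equally suspect.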
Dependency injection vs. globals does help you somewhat on this front: at the very least you can look at the code and see that it depends on, say, the registry or the file system or whatnot, without having to read every single line. And if you want a new dependency, you have to put it somewhere visible. That's pretty good.
However, I’m not sure it goes far enough, protection-wise. The trouble is that your code could in principle ask for just about anything that is available in the appropriate binding scope, and that’s often a ton of stuff. If you don’t use deep DI (that is, you use it only at the top level, and only to get access to the global capabilities that the user agreed you should have), then you can reason a lot more easily about what any given method might do. This creates a certain built-in friction to taking new dependencies (you have to wire up the constructors), which is kind of a good thing because new dependencies have to be more carefully considered. And while it’s true that injection means less typing, it’s not clear that the cognitive load is less when reading the code. It seems that a Guice system would end up with @Inject all over the place.
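Here's roughly what top-level-only wiring looks like; the names (`Registry`, `SettingsService`) are made up for illustration. The capability is an interface handed in at construction, and the only place dependencies are granted is the top of the program.

```java
// The capability is an interface, not a global -- a hypothetical example.
interface Registry {
    String read(String key);
}

// SettingsService can only touch the registry it was handed; a reader sees
// the dependency right there in the constructor signature.
class SettingsService {
    private final Registry registry;

    SettingsService(Registry registry) {
        this.registry = registry;
    }

    String theme() {
        return registry.read("theme");
    }
}

public class Main {
    public static void main(String[] args) {
        // Top-level wiring: the one place where capabilities are granted.
        // A real implementation would hit the OS; this stub just answers.
        Registry registry = key -> "dark";
        SettingsService settings = new SettingsService(registry);
        System.out.println(settings.theme()); // prints "dark"
    }
}
```

Adding a new dependency to `SettingsService` means changing its constructor and every place that constructs it, and that visible ripple is exactly the friction being argued for.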
Is this actually going to lead to more performant applications and simplified testing? I’m skeptical. The “capabilities in your constructor” pattern is dumb as rocks and also leads to good unit testing patterns. It was wildly successful in the Midori codebase.
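The testing benefit falls out for free. A hedged sketch, again with invented names (`Clock`, `SessionTimer`): because the capability arrives through the constructor, a test hands in a fake it fully controls, with no global state to patch and no mocking framework.

```java
// Hypothetical capability: a clock the code must ask for explicitly.
interface Clock {
    long nowMillis();
}

class SessionTimer {
    private final Clock clock;
    private final long startedAt;

    SessionTimer(Clock clock) {
        this.clock = clock;
        this.startedAt = clock.nowMillis();
    }

    long elapsedMillis() {
        return clock.nowMillis() - startedAt;
    }
}

public class SessionTimerTest {
    public static void main(String[] args) {
        // A fake clock the test advances by hand -- fully deterministic.
        long[] now = {1000L};
        Clock fake = () -> now[0];
        SessionTimer timer = new SessionTimer(fake);
        now[0] = 1500L;
        if (timer.elapsedMillis() != 500L) {
            throw new AssertionError("expected 500ms");
        }
        System.out.println("ok");
    }
}
```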
I feel like looking at the overhead of injection vs. its benefits, and at the long-term maintainability and pit-of-success quotient (I totally just made that up), would be a worthy undertaking.