Ok, so I previously stated that adding ‘Impl’ to the end of your service implementation would get you on my ‘people to kill list’, and that was all I had to say.
Well, although I don’t have anything original to say on the topic, I feel like elaborating.
Disclaimer: if some points sound like lessons in egg-sucking, then that’s because some readers may be currently learning to suck eggs; I’m not necessarily assuming that you need to be told : )
I’m coming at this from a Java perspective – although it probably applies to many other languages too.
Java gives us concrete types, abstract types, and interfaces. Bear in mind that…
- If Java supported multiple inheritance then we wouldn’t need interfaces – an abstract class full of abstract methods would make interfaces redundant if not for that shortcoming.
- You don’t need a Java Interface to have an interface. All classes with public methods have an implicit public interface which looks exactly the same as that of a Java Interface.
- At compile time, clients of an interface don’t care whether it’s an implicit public interface on a concrete class, abstract class, or an actual Interface type – the whole point of interfaces (and polymorphism in general) is that client code shouldn’t even need to be aware that they are, or aren’t, using an Interface / concrete class / abstract class.
- Some time back in the day, someone invented sliced bread. Since then, IDEs have been able to extract interfaces – or convert between class interfaces & actual Interfaces – at the click of a button.
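To make the third point above concrete, here’s a minimal sketch (all names hypothetical): the client method compiles and behaves identically whether `Greeter` is declared as a Java Interface, an abstract class, or a concrete class with a public method.

```java
// Hypothetical sketch: Greeter could equally be a Java Interface, an
// abstract class, or a concrete class -- the client below wouldn't change.
interface Greeter {
    String greet(String name);
}

class FriendlyGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name + "!"; }
}

public class Client {
    // Client code depends only on the interface's shape, not on which
    // kind of type declares it or which implementation arrives.
    static String welcome(Greeter greeter) {
        return greeter.greet("world");
    }

    public static void main(String[] args) {
        System.out.println(welcome(new FriendlyGreeter())); // Hello, world!
    }
}
```

Swap `interface Greeter` for `abstract class Greeter` (with `extends` in place of `implements`) and `welcome` doesn’t change by a single character.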
Where does the ‘Impl’ suffix come into this?
If you have to name your class Impl, then
- You probably only have one implementation of that interface.
- You can’t think of anything sensible to call that implementation, which probably means that the Java Interface itself doesn’t exist to specify any real interface – rather it exists only to facilitate some kind of trickery (AOP, test stubbing, etc).
- Or possibly, instead of point 2, you heard that it’s good to ‘program to interfaces’.
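A hypothetical contrast to illustrate the naming smell: when an interface abstracts over genuine alternatives, each implementation practically names itself; when the only name you can find is ‘Impl’, that’s a hint the Interface is ceremony.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example: UserStore abstracts over real alternatives,
// so each implementation has an obvious, descriptive name.
interface UserStore {
    void save(String user);
    int count();
}

// 'InMemoryUserStore' says how this implementation differs -- no 'Impl' needed.
class InMemoryUserStore implements UserStore {
    private final List<String> users = new ArrayList<>();
    public void save(String user) { users.add(user); }
    public int count() { return users.size(); }
}

// By contrast, a lone 'UserStoreImpl' would suggest there's nothing to
// distinguish -- i.e. no second implementation was ever really intended.
```

A `JdbcUserStore` or `CachingUserStore` would slot in naturally beside it; `UserStoreImpl` tells the reader nothing.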
*If* you are publishing libraries to the wider world, then there are other things to consider that may make using interfaces more attractive. Otherwise:
Firstly, in case there’s confusion, ‘program to interfaces’ doesn’t have anything to do with Java Interfaces – or any other language equivalent. We’re talking about programming to those interfaces outlined at the beginning – implicit or explicit. The idea is that we don’t want to couple our code by writing client code according to known implementation details. For the same reason that leaky abstractions should be avoided, we need to write client code as if those abstractions are opaque.
Secondly, speaking of coupling: coupling has nothing to do with compile-time dependencies. We’re talking about conceptual coupling. If we program to interfaces it’s easier to avoid coupling client code to the details of particular implementations of that interface (implicit or explicit). If client code ends up working on the assumption that its collaborator does a particular dance, then all future versions of that same implementation, and any alternative implementations, must necessarily do that same dance if they don’t want to break that client code.
So why do I care if your Interface isn’t a ‘real’ one?
1) Given that we have (with Java) both abstract classes AND Interfaces, then it makes sense to use them with distinct semantics whenever possible in order to maximise the richness our language provides. Interfaces are perfect for ‘mixins’ – that is to say that Interfaces are perfectly suited to define roles. A concrete class can ‘be’ one thing, while performing a particular role at the same time. For example, the Comparable Interface allows any class to be whatever it happens to be, while also performing the role of something that can meaningfully be compared to other instances.
Abstract classes (or any class open to extension), on the other hand, specify that a type ‘is a’ thing – even if the finer details of that thing are unspecified.
We have two different methods of achieving the same end, but with meaningful semantic differences.
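As a sketch of the ‘role’ idea (`Money` is a hypothetical class for illustration): the class *is* a money amount, while Comparable is merely a role it performs.

```java
// Money 'is a' money amount; Comparable is a role it performs on the side.
class Money implements Comparable<Money> {
    final long cents;
    Money(long cents) { this.cents = cents; }

    // The role: Money instances can be meaningfully ordered. Nothing here
    // says anything about what Money 'is' -- only what it does in this role.
    public int compareTo(Money other) {
        return Long.compare(this.cents, other.cents);
    }
}
```

Anything that works with Comparables – `Collections.sort`, `TreeSet`, and so on – can now handle Money without knowing anything else about it.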
2) Closely related to the first point, every time you ‘misuse’ an interface you hide the meaning of real interfaces. That is to say that diluting your real interfaces with artificial ones makes your overall codebase harder to reason about – whether that’s due to an interface that doesn’t specify an actual role, as previously described, or an interface that only has a single implementation – especially when it will most likely only ever have one implementation. Martin Fowler briefly touches on this here: InterfaceImplementationPair
Test implementations don’t count.
3) Adding an Interface (or abstract class) adds complexity and indirection to your production code. Complexity and indirection come at a cost. Why incur a present cost (that’s an immediate cost plus an ongoing opportunity cost relating to both complexity and misleading APIs) without any present added value?
Why don’t test implementations count?
Because tests are supposed to make your production code better – not worse. We don’t want the side-effects of testing considerations to misleadingly imply meaning in production code where none exists.
It’s 2015 and we have awesome frameworks such as Mockito which can stub concrete classes if we need to. That aside, if we’re writing nice modular composition-based code and following the Single Responsibility Principle, then it’s typically very easy to stub through extension for nice fast tests.
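A sketch of what ‘stub through extension’ can look like (class names are hypothetical):

```java
// Hypothetical production class: imagine lookupPrice hits a remote system.
class PriceService {
    int lookupPrice(String sku) {
        throw new UnsupportedOperationException("remote call");
    }
}

// A small, focused collaborator, per the Single Responsibility Principle.
class Checkout {
    private final PriceService prices;
    Checkout(PriceService prices) { this.prices = prices; }
    int total(String sku, int quantity) {
        return prices.lookupPrice(sku) * quantity;
    }
}

// In the test, extend the concrete class and override the remote bit --
// no Interface, and no mocking framework, required.
class StubPriceService extends PriceService {
    @Override int lookupPrice(String sku) { return 100; }
}
```

`new Checkout(new StubPriceService()).total("sku-1", 3)` returns 300 without touching the network, and `PriceService` never had to grow an Interface just to be testable.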
Coming soon: arguments against common, but terrible, reasons to ‘always use an interface’.