Sunday nugget (or facepalm) 001: Jenkins != Continuous Integration

This shouldn’t need to be said, but it does.

Unless you integrate your code continuously (i.e. at the very least every day), you are not doing Continuous Integration – regardless of whether or not you have a Jenkins/Hudson build run on commit.

 

Posted in Uncategorized

Baby steps towards agile development 001:

1) Quality

Disclaimer: This stuff applies 99% of the time.

You can’t exhibit agility in your software development if you write poor quality code. The same goes for tests (maybe I’d better make that step 1.5!).

A primary facet of agile software development is the agility to change direction quickly. You can’t do this when change is too expensive. Poor quality code makes extension difficult and therefore costly. Poor quality code makes refactoring very difficult, and therefore costly. Poor quality code can make automation harder (we’ll get to that later), and therefore costly.

Poor quality code can also make it more difficult to ensure that you have working software as much as possible (it’s often good to be able to demonstrate what you’ve got so far to the client, and to get feedback based on that). Your team should all be familiar with, and understand, the SOLID principles and the benefits of their appropriate application.
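As one small, hedged illustration of how the SOLID principles keep change cheap (the names Printer, ConsolePrinter and ReportService are mine, invented purely for this sketch): with dependency inversion, the high-level service depends on an abstraction, so changing direction – a new output target, a test double – costs one new implementation rather than a rewrite.

```java
// Illustrative sketch of the Dependency Inversion Principle (the 'D' in SOLID).
// All names here are hypothetical, invented for this example.
interface Printer {
    void print(String text);
}

class ConsolePrinter implements Printer {
    public void print(String text) {
        System.out.println(text);
    }
}

// ReportService depends on the Printer abstraction, not on ConsolePrinter,
// so output targets (console, file, test double) can be swapped cheaply.
class ReportService {
    private final Printer printer;

    ReportService(Printer printer) {
        this.printer = printer;
    }

    void report(String data) {
        printer.print("REPORT: " + data);
    }
}
```

A unit test can now pass in a fake Printer and assert on what was printed, without touching a real console.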

1.5) Unit Tests

Disclaimer: This stuff applies 99% of the time.

I shouldn’t really have to say this, but… write tests! And make sure they’re good ones. You can’t refactor fearlessly without a comprehensive set of automated unit tests, as a minimum, functioning as a safety net.

The agility to change rapidly means the agility to refactor without fear. You ideally need tests to cover every required unit of functionality, and those tests need to be good quality tests – poor tests are worse than no tests as they cause fear, uncertainty & doubt. At best, poor tests are an unnecessary maintenance cost – often they are far more dangerous in the way that they mislead.

A Test Driven approach to development (TDD) is often ideal – done correctly you should only ever have code that has a specific reason to live, and that code will be written in a testable way, and will be tested out of the box.
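As a minimal sketch of that rhythm (FizzBuzz is my example, not anything from a real project): the assertions are written first and fail, and the production method exists only to make them pass – so every line of it has a specific reason to live.

```java
// Test-first in miniature: FizzBuzzTest would be written (and failing)
// before FizzBuzz.say existed. Run with assertions enabled: java -ea
class FizzBuzz {
    static String say(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return String.valueOf(n);
    }
}

class FizzBuzzTest {
    public static void main(String[] args) {
        assert FizzBuzz.say(1).equals("1");
        assert FizzBuzz.say(3).equals("Fizz");
        assert FizzBuzz.say(5).equals("Buzz");
        assert FizzBuzz.say(15).equals("FizzBuzz");
    }
}
```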

You should never need to go to bed at night worrying whether you’ve broken something!

Bearing in mind that unit tests constitute a maintenance cost, there can be such a thing as too many tests (as well as overly fragile tests, etc., which I won’t go into here) – something I’ve heard referred to as ‘TDD OCD’. For example, if a particular ‘unit’ of functionality has no branching logic, then it does not need its own test – it is perfectly acceptable to test that unit indirectly as part of another, broader test.
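For instance (a hypothetical example of my own), a private helper with no branching logic doesn’t need a dedicated test – covering it through the broader behaviour that uses it is enough:

```java
// Hypothetical example: vat() contains no branching logic, so testing
// total() covers it indirectly - no dedicated test for vat() is needed.
class Invoice {
    private final double net;

    Invoice(double net) {
        this.net = net;
    }

    private double vat() {
        return net * 0.2; // assumed flat rate, for illustration only
    }

    public double total() {
        return net + vat();
    }
}
```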

See http://osherove.com for some good unit testing resources.

If you don’t have good test coverage (and by that I mean that you actually test – rather than just cover) then you are almost certainly not going to be agile in any meaningful sense of the word.

Imagine an acrobat – with a safety net he can show off his amazing agility to its fullest, and should he fall he can simply climb back up the ladder and try again. Without a safety net he might show amazing agility right up until he falls and dies. Development without such a safety net is equally reckless, and in reality it means that any real agility will not be realised, due to fear.

Posted in coding, software

Agile is for losers…

…real men embrace agility.

The development world seems to be full of ‘Agile’ pretenders. If you were to object to my saying that “there is no such thing as ‘Agile’” then I’m sorry to say that there’s a reasonable chance that you too fall into this category. Let me explain…

There is no ‘Agile manifesto’. There is a ‘Manifesto for agile software development’, which lays out some general principles which one might like to follow in order to help them develop software with agility. Of course it seems entirely reasonable to shorten ‘agile software development’ to simply ‘agile’, but herein lies the same problem we see in many other aspects of our day to day lives – words are allowed to be abused and end up losing any sense of meaning. Worse still they end up meaning something entirely different than they did to start with – for example, the word ‘trolling’ as is commonly used in the mainstream media these days has almost no resemblance to its original meaning. We’ve lost a useful word. The ‘agile’ community has suffered an even worse fate as the word ‘agile’, across sweeping parts of the industry, no longer has any relation to the concept of agility. Not only have we lost a useful word – we’ve managed to pervert the philosophy of agile software development as it is understood in many people’s minds.

We’ve reduced agility to mindless process, wrapped up in a pretty new vocabulary. An alert mind will spot the obvious paradox and endeavour to educate themselves to correct their misunderstandings. A keen but less alert mind will suffer the pain of contradiction and endure the inevitable failure (or at least a hefty dose of strife) for as long as it takes them to realise that something is wrong; at which point they will deduce either that ‘Agile’ is a fraud, or that they too need to educate themselves so that they might do it properly.

A worrying but very common (in my experience, at least) trend is that of a third kind of mind. This mind persists in having faith that such a prescriptive process (invariably involving moving cards around a board), followed obediently despite its apparently voodoo nature, will yield some kind of magical gain in productivity. These people suffer just like the other two kinds I’ve mentioned – but this time they are doomed to forever suffer the pain caused by trying to resolve contradictions, thanks to their stubborn belief in magic.

So, let’s see if you’re an agile pretender…

Posted in Uncategorized

Story Points, Bugs & asking yourself “To What End?”

Some people like to invent complication. I’ve noticed this a lot when it comes to agile software development among teams that are following the fashion without really understanding what they are doing, or why they are doing it; and it doesn’t help that since seventeen or so guys met up in a ski resort in Utah over ten years ago, a whole cottage industry has sprung up around agile coaching & training – of which I’m certain a significant number lean more towards being snake-oil salesmen selling ‘Agile’ the noun (as Dave Thomas – an original signatory of the agile manifesto – laments here).

I’ve got a few ‘Agile’ items in my backlog to moan about later, but for now I wanted to talk about a particular irritation – story points & bugs. I keep finding myself in discussions with people asking what the proper way to estimate bugs is. Often this apparent difficulty stems from the idea that story points represent business value deliverable to the client – and that, as such, we shouldn’t assign story points to bug fixes, because they are mistakes rather than value to the client, and that our velocity should only reflect our ability to deliver value to the client, rather than our ability to clean up after ourselves when we mess up. Here are a few quotes for clarity:

“Our theory is that points, and our resulting velocity, should demonstrate business value to our customers. Customers shouldn’t “pay” for bug fixes. The product should be of a valuable quality.”

“Hone skills for collaborating with the customer. Story points represent what the customer values. Better to hone the skill of writing and estimating stories, as the vehicle for delivering that value, instead of trying to figure out the convoluted ways to write a story about fixing a memory leak.”

“I don’t assign story points to defects since I think story points are a measure of the team’s ability to deliver value in an iteration. Fixing defects, while important and valuable, is NOT delivering value…it’s a drag on the team’s ability to deliver value. The more the quality is built in –> the less defects arise –> the more bandwidth the team has to deliver new stories –> the velocity should be seen to increase. Assigning story points to defects, I think, pollutes this.”

“…fixing defects, while important and valuable, is NOT delivering value…insomuch as value is measured as quality product in the hands of customers. If we’re using story points to measure our effort to make ANY changes to the product, be it new functionality or fixing defects, then have at it and assign story points to defects.

If we’re using it as a measure of a team’s ability to deliver high quality product into the hands of the customer (my preference), then no, don’t assign story points to defect work.”

 

I think that the common theme underlying the misunderstanding of story points is a misunderstanding of the term ‘business value’. I think this is often the case because people have failed to ask themselves the most important question anyone should always ask when developing software – to what end? Perhaps the dark side of the previously mentioned cottage industry preys on those who are quicker to buy a productised, step-by-step solution than they are to wonder ‘to what end?’ – and in the process grok the rationale behind agile development methodologies and principles. Give a man a fish and he’ll eat that evening; send him on a Certified ScrumMaster course and he’ll beget a team of jaded and cynical developers who feel no joy in their supposed agility – thanks to the nagging dissonance of constantly trying to resolve contradictions which needn’t exist if only they knew ‘to what end?’ they were adopting these methods & techniques, and with that understanding were able to choose and apply them appropriately.

 

Hold on – ‘Bug’ is a pretty ambiguous term…

Yes it is. So to put this post in context, we’re talking about generally well defined defects that have managed to be introduced in a sprint and have found their way outside despite the associated stories being ‘done’. We’re talking about bugs that should have been caught, as opposed to the kind that only manifest themselves when a butterfly flaps its wings a certain way.

Other kinds of bugs include those of similar origin, but which aren’t well understood or defined – these would likely require a spike, as it would make no sense to relatively size a poorly understood problem. Although there’s no reason that spike couldn’t spawn point-estimated tasks if it were worth doing so.

One last scenario worth mentioning is bugs not introduced during a sprint, but the kind that might exist in a backlog to address defects in an existing legacy system. Let’s assume that these are all well understood by now, so that we don’t need spikes. Should we assign story points to these? No – these probably shouldn’t even be part of the Scrum (I’m assuming Scrum, as it’s pretty ubiquitous these days) paradigm – particularly if they are all individual and discrete issues (as they typically are). Why on earth would we think that Scrum or even ‘Agile’ (the noun) is particularly appropriate? To what end would we introduce such a framework?

That’s the subject of another rant I’ll get around to sometime, but to cut a long story short we use agile methods and techniques to achieve a state of agility in the business:

  • Agility to respond to change in a dynamic global market.
  • Agility to fail fast and fail cheap.
  • Agility to minimise time to market in order to maximise return on investment.

 

Those kinds of bug fixes, as discrete individual items, don’t participate in the tight feedback loops that facilitate such agility in the first place. How will Scrum, when fixing these defects, help us to achieve those attributes of agility?

 

To What End?

Story points exist to decouple our estimation process from units of time. It’s as simple as that. Why?

We live our lives in hours and days, and our sprints (if we’re using something like Scrum) are measured in aggregations of those days. Our releases are probably planned in aggregations of those sprints, and so on. These are all fixed measurements, and their relations, in a quantitative sense, always remain fixed relative to each other (except for leap years, perhaps!). The problem this poses is that it is incredibly difficult to estimate accurately (in hours, for example), over an extended period of time and with all kinds of changing factors influencing our ability to deliver value to the client, just how much value we can deliver in any given timeframe.

If we can decouple our estimations from units of time, we can decouple our ability to give useful estimates from the factors that affect the time it takes to deliver – team alterations, changing product complexity, increasing productivity with familiarity, physical environmental factors, or anything else you might imagine.

By sizing stories relative to one another, using points, we can measure our velocity – our ability to deliver value to the client – in a way which is self-correcting without ever having to estimate anything other than how large one story is compared to another.

Different people in different teams can all easily estimate, with reasonable accuracy, the size of one story compared to another – regardless of how long it would actually take each of them to implement those stories compared to their colleagues. With a little effort and discipline we can have discrete teams with different velocities, all normalised according to a common understanding of story point allocation; this grants a product owner otherwise unimaginable flexibility when it comes to planning and allocating stories to multiple teams.

 

So this is the ‘end’ I alluded to when we wondered what story points were used for. So to what end would we conclude that story points shouldn’t be used to estimate bug fixes?

 

Business Value

Simply put, business value is something that the client wants. Something that the client cares about. Something they value.

The client doesn’t care whether you need to spend time upgrading your development machines to the latest OS version, or whether you need to spend 15 minutes every day rebooting your flaky git repo, or whether you need to spend more time testing because quality is slipping (a new screen roughly similar to a previously built screen is worth roughly the same to him whether you spend twice the time or half the time).

Your client DOES care about getting stories delivered, and your client DOES care about emergent bugs being fixed (hence bug fixes do provide business value – otherwise your client wouldn’t care whether you fixed it or not).

With this clearer understanding of business value, is there any end to which you can imagine that not assigning story points to estimate bugs will be beneficial? Lots of people like to assign a spike instead. The trouble with this is that (in the context I set out earlier) the bug definitely needs to be fixed, whereas spikes are by definition time-boxed efforts (which implies that bug resolution is optional). Secondly, whichever word you use to describe the allocation of resource, it involves an implicit conversion from time to story points – otherwise how do we know how many points fewer to commit to this sprint? The point of time-boxing spikes is that it makes no sense to try to size them relative to the usual stories; if it does make sense to compare them to the size of stories, then it almost certainly shouldn’t be a spike. The only reason to do so would be to accommodate this faulty ‘bug fixes aren’t business value’ logic.

 

How I would account for Bugs

In the context outlined earlier, I like to keep it as simple as possible unless there is a compelling reason not to. I would size the bug relative to other stories as usual – let’s say we think that it will be roughly 2 points. It’s important that we assign these two points because they represent work that needs to be done, which means that there are 2 points’ worth of other work that won’t fit into the sprint.

We also need to remember that the original story from which this bug leaked (if we can pin it down like that) was estimated with a definition of done which, presumably, implied that the deliverable was bug-free. The fact that we’re now addressing that bug means that the original story isn’t in fact complete, and if we kept our velocity as it is we would go into our next sprint estimating how much work we can ‘nearly finish’, rather than actually finish (especially if this bug leakage is becoming habitual – in a one-off situation this may not be worth the bother). So I’d also reduce our velocity by 2 points. That means that this sprint we’ll actually plan to deliver 4 points’ worth fewer stories, and going forward our velocity will be 2 points less until such time as we stop leaking bugs and manage to gain some velocity back.
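To put numbers on that (the starting velocity of 20 is assumed, purely for illustration), the arithmetic the paragraph describes might be sketched like this:

```java
// Hypothetical worked numbers for the velocity adjustment described above.
class SprintPlanning {
    // Going forward, velocity drops by the leaked bug's size.
    static int newVelocity(int previousVelocity, int leakedBugPoints) {
        return previousVelocity - leakedBugPoints;
    }

    // This sprint must also absorb the bug fix itself, so story capacity
    // is the reduced velocity minus the bug's points again.
    static int storyCapacityThisSprint(int previousVelocity, int leakedBugPoints) {
        return newVelocity(previousVelocity, leakedBugPoints) - leakedBugPoints;
    }
}
```

With a previous velocity of 20 and a 2-point bug, velocity drops to 18 and this sprint plans only 16 points of stories – the ‘4 points’ worth fewer’ from the text.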

 

Posted in software

Java lacks multiple inheritance of implementation because it’s not necessary?

Java itself isn’t ‘necessary’. But it is useful.

Is multiple inheritance (of implementation) dangerous? Well, even a spoon can be dangerous in the hands of a moron.

How about “Because there is a better way without multiple inheritance.” – spoken absolutely?

Let’s look at this example of the observer pattern, which I’ve roughly taken from an old Bob Martin document…
We have a Clock which is a self contained class which understands the way in which time is measured and represented. It ticks every second. We’d like to keep it self-contained and with a single responsibility.

We also want to implement an Observer pattern, whereby objects can register themselves as Observers of the clock so that they receive a notification whenever the clock updates its internal representation of the time (we don’t want to continuously poll the clock, wasting CPU cycles, when the time will only ever be updated once every second).

Given that Java doesn’t support multiple inheritance of implementations (ignoring some blurred lines introduced by Java 8 just recently), an implementation of the Observer pattern would have to look roughly like this:

import java.util.ArrayList;
import java.util.List;

public class Clock {

   public void tick() {
      // Update time each tick.
   }
}

interface Observer {
   public void update();
}

interface Subject {
   public void notifyObservers();
   public void registerObserver(Observer observer);
}

class SubjectImpl implements Subject {

   List<Observer> observers = new ArrayList<>();

   @Override
   public void notifyObservers() {
      for(Observer observer : observers) {
         observer.update();
      }
   }

   @Override
   public void registerObserver(Observer observer) {
      this.observers.add(observer);
   }
}

class ObservedClock extends Clock implements Subject {

   private Subject subjectImpl = new SubjectImpl();

   @Override
   public void tick() {
      super.tick();
      notifyObservers();
   }

   @Override
   public void notifyObservers() {
      subjectImpl.notifyObservers();
   }

   @Override
   public void registerObserver(Observer observer) {
      subjectImpl.registerObserver(observer);
   }
}
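To see the delegation version exercised end to end, here’s a self-contained restatement (identifiers renamed – TickObserver, WatchedClock, and so on – so the snippet compiles on its own) with a small demo registering an observer:

```java
import java.util.ArrayList;
import java.util.List;

interface TickObserver {
    void update();
}

interface TickSubject {
    void notifyObservers();
    void registerObserver(TickObserver observer);
}

class TickSubjectImpl implements TickSubject {
    private final List<TickObserver> observers = new ArrayList<>();

    public void notifyObservers() {
        for (TickObserver observer : observers) {
            observer.update();
        }
    }

    public void registerObserver(TickObserver observer) {
        observers.add(observer);
    }
}

class TickingClock {
    public void tick() {
        // Update time each tick.
    }
}

// The same composition-and-forwarding boilerplate as in the listing above.
class WatchedClock extends TickingClock implements TickSubject {
    private final TickSubject subject = new TickSubjectImpl();

    @Override
    public void tick() {
        super.tick();
        notifyObservers();
    }

    public void notifyObservers() {
        subject.notifyObservers();
    }

    public void registerObserver(TickObserver observer) {
        subject.registerObserver(observer);
    }
}

class ClockDemo {
    public static void main(String[] args) {
        WatchedClock clock = new WatchedClock();
        clock.registerObserver(() -> System.out.println("time changed"));
        clock.tick(); // prints "time changed"
    }
}
```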

 

Now compare the above code with the implementation using a hypothetical Java which supported multiple inheritance of implementations:


import java.util.ArrayList;
import java.util.List;

// Hypothetical syntax – real Java does not allow extending two classes.
class MIObservedClock extends Clock, Subject {

   @Override
   public void tick() {
      super.tick();
      notifyObservers();
   }
}

class Clock {

   public void tick() {
      // Update time each tick.
   }
}

interface Observer {
   public void update();
}

class Subject {

   List<Observer> observers = new ArrayList<>();

   public void notifyObservers() {
      for(Observer observer : observers) {
         observer.update();
      }
   }

   public void registerObserver(Observer observer) {
      this.observers.add(observer);
   }
}

 

Now… did writing that code unleash any zombie apocalypse? Which version is cleaner? Which is more elegant? Which is more decoupled? Which is ‘better’ ?

I’ve heard ‘Uncle’ Bob refer to Java interfaces as ‘a hack’, made for the sake of simplicity – not for our sake, but for the sake of the JVM implementation. I don’t know, but it’s probably true.

 

TL;DR?

Java interfaces are dumb.

Posted in coding, java, software, Uncategorized

Java is always pass by value

Ok, so this is explained ad infinitum elsewhere on the net. It is also, however, a question (is <some language> pass by reference or value?) asked over and over again by newbies.

I’m not going to explain the differences between the two (or three!) for newbies right now; what I’m more interested in talking about is whether it is right to refer to Java as being ‘pass by reference’ or ‘pass by value’ – on the assumption that we all know what the concepts involve.

In the Java community it is understood that Java is ‘pass by value’, because Object method arguments (we’ll ignore primitives) take the form of copies of the values of Object references – i.e. another reference to the same Object. A method parameter is never the actual Object reference supplied as an argument.

In the Ruby community (I think) the exact same semantics are considered to be ‘pass by reference’, because what you’re actually passing around is always a reference to the Object argument – never the actual Object itself.

So who’s right?

Now, I seem to be constantly telling people that words exist to describe reality – they don’t define reality. And for this reason I’m happy with both the Java & Ruby conventions, as they both make sense given some context. But… there are two reasons why I believe that ‘pass by value’ is more precise and useful, and one reason why it’s less useful.

In both of these languages, method arguments are eVALUated first, and the result of that evaluation forms the formal method parameter. I believe that the ‘by reference’/’by value’ language is originally compiler terminology and pre-dates both languages, and in such usage ‘by reference’ would refer to the situation where an expression is not evaluated first, and the actual reference is used. For this reason I believe that the Java camp is using the terminology more precisely.

Secondly, differentiating between reference & value allows us to distinguish between standard Java/Ruby semantics and the ability to actually pass pure references in languages such as C++ (using the & symbol). Clearly this is an important distinction where such references are mutable, for the same reasons that it would be important to note whether your local key-cutter was able only to copy house keys – or whether he was also able to alter a key so that it could open someone else’s house! So again the Java community’s choice is more meaningful in that sense.

So here’s the downside of describing Java as ‘pass by value’: doing so obviously obscures the fact that Java always passes Object arguments as references, and never the actual Object itself. It doesn’t really matter for the most part – but it’s not helpful to new programmers.
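A short, self-contained demonstration (my own example) of what that means in practice: reassigning a parameter inside a method has no effect on the caller, because the method only holds a copy of the reference – but mutating the shared object is visible to the caller.

```java
class PassByValueDemo {
    static void reassign(StringBuilder sb) {
        sb = new StringBuilder("reassigned"); // only the local copy changes
    }

    static void mutate(StringBuilder sb) {
        sb.append(" world"); // the one shared object is mutated
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("hello");
        reassign(sb);
        System.out.println(sb); // still "hello" - the caller's reference is untouched
        mutate(sb);
        System.out.println(sb); // "hello world" - mutation via the copied reference is visible
    }
}
```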

So we need a third term! ‘Pass by object sharing’.

Barbara Liskov, who put the L in SOLID, coined the term to avoid these difficulties. ‘Pass by sharing’ describes the semantics without implying anything about the underlying mechanism.

So if someone ever asked me in an interview (I should hope not by now) whether Java was pass by reference or value, I could say ‘yes’. More probably I would say that according to the wider Java community it is ‘pass by value’ – because arguments are evaluated before being used as formal method parameters – but that I prefer the term ‘pass by object sharing’.

Posted in coding, java, software

Micro-embedded Java DSL with fluent interface

Here’s a teeny-weeny embedded DSL used to perform password validation. It was created to rationalise existing validation routines as part of a wider piece of work – so it’s better than what it replaced, but not perfect.

 

Here’s the client code making use of its fluent interface:

private static final PasswordValidator SECRET_STRENGTH_VALIDATOR = new PasswordValidator()
  .mustContain().atLeast(SECRET_WORD_MIN).and().noMoreThan(SECRET_WORD_MAX).charactersInTotal()
  .mustContain().atLeast(1).numbers()
  .mustContain().atLeast(1).ofTheseCharacters(ALPHA_CHARS)
  .mustContain().atLeast(1).ofTheseCharacters(SPECIAL_CHARACTERS_STRING)
  .mustOnlyContain().anyOfTheseCharacters(ALPHA_CHARS + SPECIAL_CHARACTERS_STRING).or().numbers().andNothingElse()
  .mustBeMixedCase();



Let’s for now ignore the fact that this isn’t injected into the dependent client components – there’s only so much refactoring that one can or should attempt in one sitting.

I like this because it reads pretty much like an English sentence (not that ‘English’ or ‘French’, etc., is what the ‘Language’ stands for in DSL, as an old colleague of mine points out here – but in this particular case I think it’s a nice readability bonus), rather than a series of methods which must be read through in their entirety, or worse – a big old uber-method which does all of the validation in one hit, before it’s possible to know exactly what the particular validation rules are.

Of course all of that validation code still exists, but now it’s hidden away in validation rules which are constructed according to however the client code makes use of the possible combinations provided by the DSL, and executed when the validate() method is invoked on the validator. In this way many combinations of validation rules can be applied to different validator instances in a self describing way, using minimal extra code to do so.

So let’s see where the complexity has been hidden:

import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class PasswordValidator {

  private List<PasswordValidationRule> rules = new ArrayList<>();

  public boolean validate(String passwordString) {
    char[] password = passwordString == null ? new char[0] : passwordString.toCharArray();
    boolean result = true;
    for(PasswordValidationRule rule : rules) {
      result = result && rule.execute(password);
    }
    return result;
  }

  public MustContainRule mustContain(){
    MustContainRule rule = new MustContainRule(this);
    rules.add(rule);
    return rule;
  }

  public MustOnlyContainRule mustOnlyContain() {
    MustOnlyContainRule rule = new MustOnlyContainRule(this);
    rules.add(rule);
    return rule;
  }

  public PasswordValidator mustBeMixedCase() {
    rules.add(new PasswordValidationRule() {
      @Override
      public boolean execute(char[] password) {
        String passwordString = new String(password);
        String upper = passwordString.toUpperCase(Locale.ENGLISH);
        String lower = passwordString.toLowerCase(Locale.ENGLISH);
        return !(passwordString.equals(lower) || passwordString.equals(upper));
      }
    });
    return this;
  }

}



Ok. So there isn’t anything too complicated here; just a validator with three kinds of validation configurable – mixed case, must have something, and must ONLY have something. The mixed case requirement is handled here directly as it is a complete requirement in its own right. The other two kinds of requirement are meaningless without some further input (what exactly is it that we must have? Must we have a certain number of that thing? etc.). You can see that the rules are constructed to a PasswordValidationRule interface (which specifies the execute() method).
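The PasswordValidationRule interface itself never appears in the listing; from the way the rules are constructed and invoked, it is presumably just this single-method contract:

```java
// Presumed shape of the interface referenced above - it is not shown in
// the post, but validate() calls rule.execute(password) on each rule.
interface PasswordValidationRule {
    boolean execute(char[] password);
}
```

Being a single-method interface, ad hoc rules (like the mustBeMixedCase() one) can be supplied inline.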

Here’s where those extra details are constructed (I won’t show the must ONLY have option – it’s the same but slightly simpler):

import org.apache.commons.lang3.StringUtils;

public class MustContainRule implements PasswordValidationRule {
  private String chars = null;
  private boolean numbers = false;
  private int from = 0;
  private int to = 0;
  private MustContainRuleDetailAppender detailAppender;

  MustContainRule(PasswordValidator passwordValidator) {
    this.detailAppender = new MustContainRuleDetailAppender(this, passwordValidator);
  }

  public MustContainRuleDetailAppender atLeast(int num) {
    from = num;
    return detailAppender;
  }

  public MustContainRuleDetailAppender noMoreThan(int num) {
    to = num;
    return detailAppender;
  }

  @Override
  public boolean execute(char[] password) {
    boolean isValid =
         validateParticularCharacterRequirements(password)
      && validateOverallLengthRequirements(password)
      && validateNumberRequirements(password);

    return isValid;
  }

  private boolean validateParticularCharacterRequirements(char[] password) {
    if(chars == null) {
      return true;
    }

    int count = 0;
    for (char passLetter : password) {
      count += StringUtils.countMatches(chars, String.valueOf(passLetter));
    }

    return validateCount(count);
  }

  private boolean validateOverallLengthRequirements(char[] password) {
    if(chars != null) {
      return true;
    }
    return  validateCount(password.length);
  }

  private boolean validateNumberRequirements(char[] password) {
    if(numbers == false) {
      return true;
    }
    int count = 0;
    for (char passLetter : password) {
      if(Character.isDigit(passLetter)) {
        count++;
      }
    }
    return validateCount(count);
  }

  private boolean validateCount(int num) {
    boolean pass = from > 0 ? num >= from : true;
    pass = pass && (to > 0 ? num <= to: true);
    return pass;
  }

  public class MustContainRuleDetailAppender {

    private final MustContainRule mustContainRule;
    private final PasswordValidator passwordValidator;

    private MustContainRuleDetailAppender(MustContainRule mustContainRule, PasswordValidator passwordValidator) {
      this.mustContainRule = mustContainRule;
      this.passwordValidator = passwordValidator;
    }

    public MustContainRule and() {
      return mustContainRule;
    }

    public PasswordValidator ofTheseCharacters(String str) {
      mustContainRule.chars = str;
      return passwordValidator;
    }

    public PasswordValidator charactersInTotal() {
      return passwordValidator;
    }

    public PasswordValidator numbers() {
      mustContainRule.numbers = true;
      return passwordValidator;
    }

  }

}

 

And that’s it.
Here we can see that this more complicated rule makes use of a public inner class. This is required in order to limit the client code to making only valid combinations of validation rule detail, such that the client code is only able to construct a single validation rule, in its entirety, at a time. This keeps the validation rule construction code simple (and therefore better tested), and the client code is forced to be written in an easy-to-understand way.

This is achieved by controlling which methods are available depending on which object type is returned from each method, and therefore which methods are then available in the chain. Being forced to use the API in a specific way is not a bad thing, as it actually limits the client code to invoking only a few sensible options at a time. For example, after

  .mustContain().atLeast(SOME_NUMBER)

the client code only has the option of invoking and() in order to add a numerical limit to that same validation rule currently being constructed, ofTheseCharacters() to specify the character set of which SOME_NUMBER must be used, numbers() to specify that SOME_NUMBER of numbers must be present, or finally charactersInTotal() to indicate that there must be at least SOME_NUMBER of characters in total in the password.

Each of these methods, apart from and(), returns the original validator object, which means that any further validation criteria must exist as another ValidationRule object. Because the MustContain and MustOnlyContain rules are so tightly coupled with their ….DetailAppender classes they have been implemented as inner classes; they’re effectively part of one larger class, but using this technique it’s possible to simulate a kind of context-dependent method accessibility (unless the client code decides to deliberately break the mechanism by not chaining its invocations).


Isn’t this more complicated?
In this case I feel I can justify the marginal one-off increase in complexity for the reusable simplicity and readability. All validation will be variations on the same theme – so it feels right to me to standardise the mechanics of those variations.

The DSL apparatus itself is still really pretty simple, and as such it can easily be unit tested with confidence; and with that being the case it feels as though it is actually safer to now implement various validation routines using the DSL’s guiding hand, like this:

private static final PasswordValidator STRENGTH_VALIDATOR_1 = new PasswordValidator()
    .mustContain().atLeast(8).and().noMoreThan(20).charactersInTotal()
    .mustContain().atLeast(2).numbers()
    .mustContain().atLeast(3).ofTheseCharacters(ALPHA_CHARS)
    .mustContain().atLeast(1).ofTheseCharacters(SPECIAL_CHARACTERS)
    .mustOnlyContain().anyOfTheseCharacters(ALPHA_CHARS + SPECIAL_CHARACTERS).or().numbers().andNothingElse()
    .mustBeMixedCase();

private static final PasswordValidator STRENGTH_VALIDATOR_2 = new PasswordValidator()
    .mustContain().atLeast(8).and().noMoreThan(20).charactersInTotal()
    .mustBeMixedCase();

private static final PasswordValidator STRENGTH_VALIDATOR_3 = new PasswordValidator()
    .mustContain().atLeast(10).charactersInTotal()
    .mustContain().noMoreThan(3).numbers()
    .mustOnlyContain().anyOfTheseCharacters(ALPHA_CHARS).or().numbers().andNothingElse();



…than it would be to code them up in the typical, less structured way. It makes it easier to compare the differences between various validation routines, avoid bugs or assumed behaviour which doesn’t actually exist (which I discovered in the old code during this process, although decent tests should have highlighted that in the first place), and helps to keep that utility class closer to one thousand lines than two thousand o.O

 
