Sunday nugget 002: There is no such thing as a ‘BDD test’

Using JBehave or Cucumber (or whatever) does not make a test a ‘BDD test’. There is no such thing as a BDD test.

BDD is a way of developing, just like TDD (albeit typically at a more abstract level). Frameworks like JBehave and Cucumber can aid you in your behaviour driven development.

Labelling arbitrary tests ‘BDD tests’ just because you used a given/when/then syntax, completely misses the point of BDD – and devalues any real BDD that you might be doing.

And finally, as an aside, if you value your ‘BDD tests’ for any living specification they offer – then that living specification is precious! Don’t ruin it with tests which don’t serve that function in a clear and concise manner. Especially if the only reason you’re adding those tests is because ‘we do BDD’.


Don’t break down user stories like a plonker

Let’s suppose the product owner gives us a story something like this:

I want to process incoming XML payloads with an encrypted block of elements contained within the unencrypted main body.

And let’s suppose that when we look at what that entails, it’s something like this:

  1. Edit DTD to accommodate encrypted block (which will be JSON containing key-value pairs to then be substituted into well-formed XML as if the encryption hadn’t been used).
  2. Identify whether this payload is of the encrypted variety or not.
  3. Check that the client is allowed to use this new partially encrypted service.
  4. Obtain encrypted private key from DB, decrypt it, decrypt the AES key in the payload, use the AES key to decrypt the rest of the encrypted block (sketched roughly below).
  5. Basic validation of elements that have been decrypted (probably just that they are all present).
  6. Create a well-formed XML document by putting the decrypted JSON key-value pairs into their correct places within the XML document.
  7. Process the XML document as we normally would do before the new encrypted service.

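Just to make the mechanics of step 4 concrete (the sketch promised above), here’s a rough outline of that decryption chain – assuming the AES key arrives RSA-wrapped, and with the class and method names invented purely for illustration:

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.security.PrivateKey;

class PayloadDecryptor {

   // Assumes the private key has already been fetched from the DB and decrypted (step 4).
   byte[] decryptBlock(PrivateKey privateKey, byte[] wrappedAesKey, byte[] encryptedBlock) throws Exception {
      // Unwrap the per-payload AES key with our RSA private key.
      Cipher rsa = Cipher.getInstance("RSA");
      rsa.init(Cipher.DECRYPT_MODE, privateKey);
      byte[] aesKeyBytes = rsa.doFinal(wrappedAesKey);

      // Use the recovered AES key to decrypt the embedded block of JSON key-value pairs.
      Cipher aes = Cipher.getInstance("AES");
      aes.init(Cipher.DECRYPT_MODE, new SecretKeySpec(aesKeyBytes, "AES"));
      return aes.doFinal(encryptedBlock);
   }
}
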
Now let’s suppose that this all looks too big to fit into one sprint. We should break it down into smaller user stories. Fine.

  1. Let’s pull out the decryption tasks into their own user story.

Er… ok. How are you going to go about that?

I want a decryption component to decrypt stuff that our new service consumes.

Noooooooo!   This sucks – big time. Who wants a decryption component? No one. I don’t care about a decryption component, the PO doesn’t want a decryption component, and the customer doesn’t want one either. What we all really want is a solution – a decryption component is of no value to anyone. Now the other steps in our breakdown above don’t really make any sense because there’s a big hole in the process flow.

I want a component to parse JSON key-value pairs and to place them into an XML template in order to generate a full & well-formed XML document.

I want a story to mop up everything else not already covered by the previous 2 stories.

Now there is no hole in the process flow because each step is in its own discrete story. Huzzah!

Kill me now. Why do we even try to fit stories into sprints in the first place? Why do we have sprints at all? Because we don’t want ‘nearly done’ but non-demonstrable code floating around for extended periods. We want to show progress frequently in order to get rich and timely feedback, rather than having to wait until the last piece of the puzzle is put in place before anything actually works, and because the concept of a story bundles functional requirements, code & tests, all into a neat little package of context.

The naively crafted stories above are well on their way to dictating implementation details, rather than specifying problems that need solutions. Especially when you start adding acceptance criteria to them.

What are the QAs going to do on the decryption story? The programmer will unit test it, expecting the QA to help him ensure it works properly, and makes sense, in the wider context of the solution to the specified problem. In reality they know that they want to test this decryption implicitly as part of the problem solution. They are going to be writing automated tests which don’t actually map to this user story – as if the tests actually belong to an epic which contains that story. But that means they can only complete their tests once we’ve implemented enough of the other stories that we actually have something resembling a solution. There isn’t much they can do with the story itself that the programmer can’t already handle with his unit testing. Moreover, even if there were, it would have nothing to do with the problem solution – they’d just be testing an arbitrarily chosen implementation detail of the forthcoming solution.

What will we demo to the PO & stakeholders? Are they going to care that your language of choice can do decryption? Will they care that you crafted your decryption class in a certain way? Of course not. So if we have nothing to demo, why did we bother breaking down the original story in the first place? We’re now, practically speaking, in the same position we would have been with one monster story broken into several technical sub-tasks – except that would at least still retain its user story integrity, albeit a user story that would need a longer sprint to complete. The list of issues goes on.

Having some meetings every 2 weeks does not make a series of sprints. Arbitrarily splitting work into 2 week chunks does not make a series of sprints. Labelling a unit of work a ‘User Story’ does not make it any more valuable, nor does it make anything easier. — Me

So what should we have done? I won’t suggest ‘the’ way it should be done. Instead I’ll suggest the very first method that jumped into my head, and suggest that with some thought an even better solution might present itself. Even so, I believe that this alternative would be MUCH more desirable than what we’ve discussed so far. Make the problem easier to solve, and deliver that solution. Then flesh out the solution to solve the full problem in a later iteration. That is to say we should maintain user stories that actually serve the purpose of a user story. We should maintain that little package of problem/solution wrapped up in context…

I want to process incoming XML payloads which contain an embedded JSON payload of key-value pairs, held within the main document body in place of regular XML elements.

This is the whole original problem, but a little simpler. The JSON isn’t encrypted. We can implement the entire solution, with proper agile testing – possibly with some nice BDD-style test specs to serve as living documentation – and have something complete, if not yet fully functional, to demonstrate at the end of the sprint. The more holistic nature of implementing a solution, as opposed to a component, helps to leverage the benefits that Scrum offers when it comes to providing team context & cohesion. It also means that we have a “potentially shippable” product increment (although perhaps it’s worth noting that the Scrum Guide now says ‘releasable’ rather than ‘shippable’); the salient point here is that a decryption class is not a product increment, as it is effectively dead code until the solution as a whole is ready to make use of it.
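
To make that concrete, here’s a very rough sketch of the substitution step for this unencrypted version – assuming Jackson for the JSON and the JDK’s DOM API for the XML; the <embeddedJson> element name and the class itself are made up for illustration:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

class EmbeddedJsonExpander {

   private final ObjectMapper mapper = new ObjectMapper();

   // Replaces the hypothetical <embeddedJson> element with ordinary XML elements
   // built from its JSON key-value pairs (assumes the keys are valid element names).
   void expand(Document doc, Element embeddedJson) throws Exception {
      JsonNode pairs = mapper.readTree(embeddedJson.getTextContent());
      Node parent = embeddedJson.getParentNode();
      pairs.fields().forEachRemaining(pair -> {
         Element element = doc.createElement(pair.getKey());
         element.setTextContent(pair.getValue().asText());
         parent.appendChild(element);
      });
      parent.removeChild(embeddedJson);
   }
}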

The selected Product Backlog items deliver one coherent function, which can be the Sprint Goal. — Scrum Guide

Now that we have a working solution that we can demonstrate in its current form, we can add an additional user story to the backlog:

I want the new service to be able to accept and process encrypted JSON in its payload, instead of unencrypted JSON.

Now, in a further 2 weeks at the next sprint demo, we can show off our all-singing, all-dancing version. The existing tests can be tweaked and we can now integrate with whatever it is that is producing encrypted JSON in the first place. More importantly, no kittens get harmed in the process, which is more than can be said of the original effort. This activity (generating and breaking down stories) should very rarely be difficult. As soon as it is, stop – breathe… ask yourself why you’re even generating user stories in the first place. Then ask yourself why you care if it fits into a sprint or not. Then carry on.


Sunday nugget (or facepalm) 001: Jenkins != Continuous Integration

This shouldn’t need to be said, but it does.

Unless you integrate your code continuously (i.e. at the very least every day), you are not doing Continuous Integration – regardless of whether you have a Jenkins/Hudson build run on commit or not.

 


Baby steps towards agile development 001:

1) Quality

Disclaimer: This stuff applies 99% of the time.

You can’t exhibit agility in your software development if you write poor quality code. The same goes for tests (maybe I’d better make that step 1.5!).

A primary facet of agile software development is the agility to change direction quickly. You can’t do this when change is too expensive. Poor quality code makes extension difficult and therefore costly. Poor quality code makes refactoring very difficult, and therefore costly. Poor quality code can make automation harder (we’ll get to that later), and therefore costly.

Poor quality code can also make it more difficult to ensure that you have working software as much as possible (it’s often good to be able to demonstrate what you’ve got so far to the client, and to get feedback based on that). Your team should all be familiar with, and understand, the SOLID principles & the benefits of their appropriate application.

1.5) Unit Tests

Disclaimer: This stuff applies 99% of the time.

I shouldn’t really have to say this, but… write tests! And make sure they’re good ones. You can’t refactor fearlessly without a comprehensive set of automated unit tests, as a minimum, functioning as a safety net.

The agility to change rapidly means the agility to refactor without fear. You ideally need tests to cover every required unit of functionality, and those tests need to be good quality tests – poor tests are worse than no tests as they cause fear, uncertainty & doubt. At best, poor tests are an unnecessary maintenance cost – often they are far more dangerous in the way that they mislead.

A Test Driven approach to development (TDD) is often ideal – done correctly, you should only ever have code that has a specific reason to live, and that code will be written in a testable way and will be tested out of the box.

You should never need to go to bed at night worrying whether you’ve broken something!

Bearing in mind that unit tests constitute a maintenance cost, there can be such a thing as too many tests (as well as overly fragile tests, etc., which I won’t go into here) – something I’ve heard referred to as ‘TDD OCD’. For example, if a particular ‘unit’ of functionality has no branching logic, then it does not need its own test – it is perfectly acceptable to test that unit indirectly as part of another, broader test.
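
A contrived sketch of what I mean, using JUnit 4 and made-up classes – format() has no branching, so it’s covered indirectly by the broader test rather than earning a test of its own:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

class FullName {
   static String format(String first, String last) {
      return first + " " + last;   // no branching logic - doesn't need a test of its own
   }
}

class Greeter {
   String greet(String first, String last) {
      return "Hello, " + FullName.format(first, last) + "!";
   }
}

public class GreeterTest {
   @Test
   public void greetsByFullName() {
      // FullName.format() is exercised (and its behaviour pinned down) right here.
      assertEquals("Hello, Ada Lovelace!", new Greeter().greet("Ada", "Lovelace"));
   }
}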

See http://osherove.com for some good unit testing resources.

If you don’t have good test coverage (and by that I mean that you actually test – rather than just cover) then you are almost certainly not going to be agile in any meaningful sense of the word.

Imagine an acrobat – with a safety net he can show off his amazing agility to its fullest, and should he fall he can simply climb back up the ladder and try again. Without a safety net he might show amazing agility right until he falls and dies. Development without such a safety net is equally reckless, and in reality that means that any real agility will not be realised, due to fear.


Agile is for losers…

…real men embrace agility.

The development world seems to be full of ‘Agile’ pretenders. If you were to object to my saying that “there is no such thing as ‘Agile’”, then I’m sorry to say that there’s a reasonable chance that you too fall into this category. Let me explain…

There is no ‘Agile manifesto’. There is a ‘Manifesto for agile software development’, which lays out some general principles that one might follow in order to develop software with agility. Of course it seems entirely reasonable to shorten ‘agile software development’ to simply ‘agile’, but herein lies the same problem we see in many other aspects of our day-to-day lives – words are allowed to be abused and end up losing any sense of meaning. Worse still, they end up meaning something entirely different than they did to start with – for example, the word ‘trolling’ as commonly used in the mainstream media these days has almost no resemblance to its original meaning. We’ve lost a useful word. The ‘agile’ community has suffered an even worse fate, as the word ‘agile’, across sweeping parts of the industry, no longer has any relation to the concept of agility. Not only have we lost a useful word – we’ve managed to pervert the philosophy of agile software development as it is understood in many people’s minds.

We’ve reduced agility to mindless process, wrapped up in a pretty new vocabulary. An alert mind will spot the obvious paradox and endeavour to educate themselves to correct their misunderstandings. A keen but less alert mind will suffer the pain of contradiction and endure the inevitable failure (or at least a hefty dose of strife) for as long as it takes them to realise that something is wrong; at which point they will deduce either that ‘Agile’ is a fraud, or that they too need to educate themselves so that they might do it properly.

A worrying but very common (in my experience, at least) trend is that of a third kind of mind. This mind persists in having faith that such a prescriptive process (invariably involving moving cards around a board), followed obediently despite its apparent voodoo nature, will yield some kind of magical gain in productivity. These people suffer just like the other two kinds I’ve mentioned – but this time they are doomed to forever suffer the pain caused by trying to resolve contradictions, thanks to their stubborn belief in magic.

So, let’s see if you’re an agile pretender…


Story Points, Bugs & asking yourself “To What End?”

Some people like to invent complication. I’ve noticed this a lot when it comes to agile software development among teams that are following the fashion without really understanding what they are doing, or why they are doing it; and it doesn’t help that since seventeen or so guys met up in a ski resort in Utah over ten years ago, a whole cottage industry has sprung up around agile coaching & training – a significant number of whom, I’m certain, lean more towards being snake-oil salesmen selling ‘Agile’ the noun (as Dave Thomas – an original signatory of the agile manifesto – laments here).

I’ve got a few ‘Agile’ items in my backlog to moan about later, but for now I wanted to talk about a particular irritation – story points & bugs. I keep finding myself in discussions with people asking what the proper way to estimate bugs is. Often this apparent difficulty stems from the idea that story points represent business value deliverable to the client – and that as such we shouldn’t assign story points to bug fixes, because they are mistakes rather than value to the client, and that our velocity should only reflect our ability to deliver value to the client, rather than our ability to clean up after ourselves when we mess up. Here are a few quotes for clarity:

“Our theory is that points, and our resulting velocity, should demonstrate business value to our customers. Customers shouldn’t “pay” for bug fixes. The product should be of a valuable quality.”

“Hone skills for collaborating with the customer. Story points represent what the customer values. Better to hone the skill of writing and estimating stories, as the vehicle for delivering that value, instead of trying to figure out the convoluted ways to write a story about fixing a memory leak.”

“I don’t assign story points to defects since I think story points are a measure of the team’s ability to deliver value in an iteration. Fixing defects, while important and valuable, is NOT delivering value…it’s a drag on the team’s ability to deliver value. The more the quality is built in –> the less defects arise –> the more bandwidth the team has to deliver new stories –> the velocity should be seen to increase. Assigning story points to defects, I think, pollutes this.”

“…fixing defects, while important and valuable, is NOT delivering value…insomuch as value is measured as quality product in the hands of customers. If we’re using story points to measure our effort to make ANY changes to the product, be it new functionality or fixing defects, then have at it and assign story points to defects.

If we’re using it as a measure of a team’s ability to deliver high quality product into the hands of the customer (my preference), then no, don’t assign story points to defect work.”

 

I think that the common theme here that underlies the misunderstanding of story points is the apparent misunderstanding of the term ‘business value’. I think that this is often the case because people have failed to ask themselves the most important question anyone should always ask when developing software – to what end? Perhaps the dark side of the previously mentioned cottage industry preys on those that are quicker to buy a productised – step by step – solution than they are to wonder ‘to what end?’ and in the process grok the rationale behind agile development methodologies and principles. Give a man a fish and he’ll eat that evening; send him on a Certified ScrumMaster course and he’ll beget a team of jaded and cynical developers who feel no joy in their supposed agility – thanks to the nagging dissonance associated with constantly trying to resolve contradictions which needn’t exist if they only knew ‘to what end?’ they were adopting these methods & techniques, and with that understanding were able to choose and apply them appropriately.

 

Hold on – ‘Bug’ is a pretty ambiguous term…

Yes it is. So to put this post in context, we’re talking about generally well-defined defects that have managed to be introduced in a sprint and have found their way outside despite the associated stories being ‘done’. We’re talking about bugs that should have been caught, as opposed to the kind that only manifest themselves when a butterfly flaps its wings a certain way.

Other kinds of bugs include those of similar origin, but which aren’t well understood or defined – these would likely require a spike, as it would make no sense to relatively size a poorly understood problem. Although there’s no reason that spike couldn’t spawn a point-estimated task if it were worth doing so.

One last scenario worth mentioning is bugs not introduced during a sprint, but the kind that might exist in a backlog to address defects in an existing legacy system. Let’s assume that these are all well understood by now, so that we don’t need spikes. Should we assign story points to these? No – these probably shouldn’t even be part of the Scrum (I’m assuming Scrum, as it’s pretty ubiquitous these days) paradigm – particularly if they are all individual and discrete issues (as they typically are). Why on earth would we think that Scrum or even ‘Agile’ (the noun) is particularly appropriate? To what end would we introduce such a framework?

That’s the subject of another rant I’ll get around to sometime, but to cut a long story short we use agile methods and techniques to achieve a state of agility in the business:

  • Agility to respond to change in a dynamic global market.
  • Agility to fail fast and fail cheap.
  • Agility to minimise time to market in order to maximise return on investment.

 

Those kinds of bug fixes, as discrete individual items, don’t participate in the tight feedback loops that facilitate such agility in the first place. How will Scrum, when fixing these defects, help us to achieve those attributes of agility?

 

To What End?

Story points exist to decouple our estimation process from units of time. It’s as simple as that. Why?

We live our lives in hours and days, and our sprints (if we’re using something like Scrum) are measured in aggregations of those days. Our releases are probably planned in aggregations of those sprints, etc. These are all fixed measurements, and their relations, in a quantitative sense, always remain fixed relative to each other (except for leap years perhaps!). The problem this poses is that, over an extended period of time with all kinds of changing factors influencing our ability to deliver value to the client, it becomes incredibly difficult to estimate accurately (in hours, for example) just how much value we can deliver in any given timeframe.

If we can decouple our estimations from units of time, we can decouple our ability to give useful estimates from the factors that affect the time it takes to deliver – team alterations, changing product complexity, increasing productivity with familiarity, physical environmental factors, or anything else you might imagine.

By sizing stories relative to one another, using points, we can measure our velocity – our ability to deliver value to the client – in a way which is self-correcting without ever having to estimate anything other than how large one story is compared to another.

Different people in different teams can all easily estimate, with reasonable accuracy, the size of one story compared to another – regardless of how long it would actually take each of them to implement those stories compared to their colleagues. With a little effort and discipline we can have discrete teams with different velocities, all normalised according to a common understanding of story point allocation; This grants a product owner otherwise unimaginable flexibility when it comes to planning and allocating stories to multiple teams.
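
To put some made-up numbers on that: if Team A’s velocity settles at around 30 points per sprint and Team B’s at around 18, and the remaining backlog relative-sizes to roughly 120 points, the product owner can forecast about 120 / (30 + 18) ≈ 2.5 sprints of work remaining – without anyone ever having estimated an hour, and without caring that the same 5-point story might take Team A two days and Team B four.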

 

So this is the ‘end’ I alluded to when we wondered what story points were used for. So to what end would we conclude that story points shouldn’t be used to estimate bug fixes?

 

Business Value

Simply put, business value is something that the client wants. Something that the client cares about. Something they value.

The client doesn’t care whether you need to spend time upgrading your development machines to the latest OS version, or whether you need to spend 15 minutes every day rebooting your flakey git repo, or whether you need to spend more time testing because quality is slipping (a new screen roughly similar to a previously built screen is worth roughly the same to him whether you spend twice the time or half the time).

Your client DOES care about getting stories delivered, and your client DOES care about emergent bugs being fixed (hence bug fixes do provide business value – otherwise your client wouldn’t care whether you fixed it or not).

With this clearer understanding of business value, is there any end to which you can imagine that not assigning story points to estimate bugs will be beneficial? Lots of people like to assign a spike instead. The trouble with this is that (in the context which I set out earlier) the bug definitely needs to be fixed, and spikes are by definition time-boxed efforts (which implies that bug resolution is optional). Secondly, whichever word you use to describe the allocation of resource, it involves an implicit conversion from time to story points – otherwise how do we know how many points fewer to commit to this sprint? The point of time-boxing spikes is that it makes no sense to try to size them relative to the usual stories; if it does make sense to compare them to the size of stories, then it almost certainly shouldn’t be a spike. The only reason to do so would be to accommodate this faulty ‘bug fixes aren’t business value’ logic.

 

How I would account for Bugs

In the context outlined earlier, I like to keep it as simple as possible unless there is a compelling reason not to. I would size the bug relative to other stories as usual – let’s say we think that it will be roughly 2 points. It’s important that we assign these two points because they represent work that needs to be done, which means that there is 2 points’ worth of other work that won’t fit into the sprint.

We also need to remember that the original story from which this bug leaked (if we can pin it down like that) was estimated with a definition of done which, presumably, implied that the deliverable was bug-free. The fact that we’re now addressing that bug means that the original story isn’t in fact complete, and if we kept our velocity as it is we would go into our next sprint estimating how much work we can ‘nearly finish’, rather than actually finish (especially if this bug leakage is becoming habitual – in a one-off situation this may not be worth the bother). So I’d reduce our velocity by 2 points, for example, too. That means that this sprint we’ll actually plan to deliver 4 points’ worth fewer stories, and going forward our velocity will be 2 points less until such time that we stop leaking bugs and manage to gain some velocity back.
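
With some made-up numbers: if our velocity had settled at 20 points, we’d plan this sprint at 18 (the reduced velocity), of which 2 points are the bug fix itself – leaving 16 points of new stories, i.e. 4 points fewer than usual – and subsequent sprints would be planned at 18 until the leakage stops and we earn that velocity back.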

 


Java lacks multiple inheritance of implementation because it’s not necessary?


Java itself isn’t ‘necessary’. But it is useful.

Is multiple inheritance (of implementation) dangerous? Well, even a spoon can be dangerous in the hands of a moron.

How about “Because there is a better way without multiple inheritance.” – spoken absolutely?

Let’s look at this example of the observer pattern, which I’ve roughly taken from an old Bob Martin document…
We have a Clock, which is a self-contained class that understands the way in which time is measured and represented. It ticks every second. We’d like to keep it self-contained and with a single responsibility.

We also want to implement an Observer pattern, whereby objects can register themselves as Observers of the clock so that they receive a notification whenever the clock updates its internal representation of the time (we don’t want to continuously poll the clock, wasting CPU cycles, when the time will only ever be updated once every second).

Given that Java doesn’t support multiple inheritance of implementations (ignoring some blurred lines introduced by Java 8 just recently), an implementation of the Observer pattern would have to look roughly like this:

import java.util.ArrayList;
import java.util.List;

public class Clock {

   public void tick() {
      // Update time each tick.
   }
}

interface Observer {
   public void update();
}

interface Subject {
   public void notifyObservers();
   public void registerObserver(Observer observer);
}

class SubjectImpl implements Subject {

   List<Observer> observers = new ArrayList<>();

   @Override
   public void notifyObservers() {
      for(Observer observer : observers) {
         observer.update();
      }
   }

   @Override
   public void registerObserver(Observer observer) {
      this.observers.add(observer);
   }
}

class ObservedClock extends Clock implements Subject {

   // Delegate the Subject behaviour to a composed SubjectImpl, since we can't also inherit it.
   private Subject subjectImpl = new SubjectImpl();

   @Override
   public void tick() {
      super.tick();
      notifyObservers();
   }

   @Override
   public void notifyObservers() {
      subjectImpl.notifyObservers();
   }

   @Override
   public void registerObserver(Observer observer) {
      subjectImpl.registerObserver(observer);
   }
}
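
For what it’s worth, a quick usage sketch of the version above (assuming Java 8+, so the single-method Observer interface can be implemented with a lambda):

ObservedClock clock = new ObservedClock();
clock.registerObserver(() -> System.out.println("tick received"));
clock.tick();   // Clock updates the time, then every registered observer is notified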

 

Now compare the above code with the implementation using a hypothetical Java which supported multiple inheritance of implementations:


import java.util.ArrayList;
import java.util.List;

// Hypothetical syntax - real Java won't compile 'extends Clock, Subject'.
class MIObservedClock extends Clock, Subject {

   @Override
   public void tick() {
      super.tick();
      notifyObservers();
   }
}

class Clock {

   public void tick() {
      // Update time each tick.
   }
}

interface Observer {
   public void update();
}

class Subject {

   List<Observer> observers = new ArrayList<>();

   public void notifyObservers() {
      for(Observer observer : observers) {
         observer.update();
      }
   }

   public void registerObserver(Observer observer) {
      this.observers.add(observer);
   }
}

 

Now… did writing that code unleash any zombie apocalypse? Which version is cleaner? Which is more elegant? Which is more decoupled? Which is ‘better’?

I’ve heard ‘Uncle’ Bob refer to Java interfaces as ‘a hack’, made for the sake of simplicity – but not simplicity for us; simplicity for the sake of the JVM implementation. I don’t know, but it’s probably true.

 

TL;DR?

Java interfaces are dumb.
