Why change fails – Part 2 of 3: Process

December 14, 2015

This is the second in a series of three articles published in our Brickendon Journals. The articles explore some of the underlying patterns associated with change failure by looking at the problem from three distinct perspectives – people, process and technology. For the first article, see https://btcwebdev.wpengine.com/?post_type=article&p=5484. The third will appear in the first journal of 2016.

Having examined the people aspect, we now turn to the process-driven reasons for change failure. As with the opening article, this isn’t intended to be an encyclopaedic catalogue of change-failure examples, but rather a perspective on some of the underlying patterns and how they contribute to the lack of change success we unfortunately see in many programmes today.

One of the primary process-driven reasons for change failure – and often the source of no small amount of friction between different parties – is the Software Development Lifecycle (SDLC). Of course, that may be no great surprise to many of you… but the reason for it may not be what you expect.

There are many different methodologies, from Traditional Waterfall, through Iterative Waterfall and Iterative Agile, to Incremental Agile – each with its own characteristics, applications and mind-set.

Given recent developments, it’s also worth touching on two particular areas that add new perspectives and benefits to the more traditional process views above, both of which aim to build on the Agile philosophy and scale it to the enterprise – DevOps and DAD.

  • DevOps: Many development methodologies, such as Agile software development, encourage collaboration between the analysis, design, development and test functions. However, functionally silo-based organisations often have little collaboration between “change” and “run” functions. Change teams and run teams fundamentally see the world, and their roles in it, differently. Each believes it is doing the right thing for the business… and both are correct. DevOps strives to enable the benefits of Agile development to be felt at the enterprise level by providing a responsive, yet stable, IT “run” that can be kept in sync with the pace of innovation coming out of the development process. So if you have Agile development teams but are only seeing Waterfall realisation of business benefits… then DevOps could be of significant benefit. Be aware, however, that the implementation effort, due to its very nature, can be significant.
  • DAD: Agile teams are often, and ideally, self-organising, with a focus on development. Disciplined Agile Delivery (DAD) extends the development-focused lifecycle of Scrum to address the full, end-to-end delivery lifecycle, from project initiation all the way to delivering the solution, and its value, to the enterprise.

A great deal of time, effort and pain is often rooted in heated debate over which of these methodologies is most appropriate, particularly within the middle ground: Iterative Waterfall vs. Iterative Agile. But whichever you choose, and however long you’ve been using it, the question to ask is: how well are you doing it?

Typical symptoms of poorly implemented methodologies are numerous, but most commonly include:

  • Working backwards with fixed scope: Devising a plan by working backwards from a known target go-live date is of course a valid approach where scope can be tailored to fit the available capacity. Devising a plan by working forwards from a clear set of requirements and scope to reach a projected end date is also a valid approach where the scope is fixed. But fixing an end date, dividing up the time between SDLC phases and assuming the fixed scope can be delivered by an existing team, is typically not so successful. What’s surprising is how often change starts with this approach.
  • Keeping busy: Not being too concerned about having a clear view of what to develop, but getting started on something while that view clarifies. Of course we’ve all done it as circumstances dictate, but it needs to be managed incredibly carefully. For example, are the development team building core infrastructure, or assumed business functionality?
  • Box ticking: Fixation on “ticking the box” for the milestones, rather than delivering the value envisioned on inception.
  • Blame game: Using stage gateways, and sign-offs, as a means of ensuring that any delays are clearly the fault of some other group. Examples include: Technology blaming Business for delays with Business Requirement Documents; Business blaming Technology for not being able to deliver everything on the long list; and everyone blaming an external group when something comes in from left-field. In essence, the focus is no longer on joint success, but simply on making sure that the failure isn’t yours.

Within Agile implementations specifically, common symptoms include:

  • No documentation: Although documentation artefacts differ within Agile, Agile is not simply phased Waterfall with the documentation removed.
  • No control: The nimble nature of Agile delivery teams is often exploited (usually through misguided enthusiasm rather than deliberately Machiavellian behaviour) to throw in last-minute changes, relax go-live controls and accept inadequate quality controls.
  • No transparency: Although those within the Agile machine (product owners, Scrum masters, team members) have a clear view on what’s going on, often the broader group and enterprise have very little visibility.

Do any of these symptoms sound familiar? Are questions being asked around whether the most appropriate methodology is being used within your organisation? Perhaps you need to be asking a different question.

It’s not so much the selection of the methodology that needs to be questioned – that choice is often rooted in technological momentum, company character, personalities, market trends, CV building and, of course, the available skills of current staff. Rather, it’s a question of quality: whatever the choice of methodology, is it being well implemented?

So what does “good process” look like? Well, any successful enterprise-level change process needs a clear understanding of what “complete” looks like, clear measures of progress towards “complete”, identification and removal of any risks that could prevent the programme reaching “complete”, and timely corrective actions and decisions to maintain progress towards “complete”.

In short, the vital thing with process (although some will suit certain situations better than others) is less the actual choice, and more the quality and rigour with which it is implemented.

Sloppy process leads to failure… whatever the process.
