Why Change Fails: Part 3 of 3 – Technology

March 29, 2016

In this final part of our series of articles on change, we look at the technology-related factors that contribute to the failure of change initiatives. We will focus specifically on the less obvious perspectives, from hidden complexity through to loss of control, and on how consumers generally determine how, or indeed whether, to embrace technology-led change.

Firstly, let’s consider technical debt. The term is of course familiar to many, and has been an all-too-common part of the technical vocabulary since it was coined by Ward Cunningham, creator of the first wiki, back in 1992. In essence, it is the additional cost that will be incurred in the future as a result of ‘quick and dirty’ changes made now. For example, each expedient addition, such as installing a third-party application on a PC or allowing users to develop their own software, will typically make future change more difficult.

Additionally, the older the technical debt, the more expensive it often becomes, as it grows increasingly difficult for new staff to unpick. So, although technical debt doesn’t directly prevent change, it can significantly increase change drag. Sometimes that ever-increasing drag is accepted as the cost of delivery, but sometimes it reaches a tipping point at which time and cost become unacceptable to stakeholders.

Secondly, let’s consider shadow IT. Often also referred to as stealth IT, personal IT or simply the consumerisation of IT, this increasingly used term describes technology solutions that are used within organisations without the explicit approval, or often even the awareness, of the technology organisation. While it’s an interesting topic in its own right, it can also be seen as a factor in change failure for a number of reasons:

  • Budgetary diversion: Estimates of shadow IT costs within large enterprises vary depending on the source, but figures of 20 per cent of total IT spend are not uncommon, and Gartner figures have indicated that this number may rise to as much as 35 per cent. Although each instance is often funded with the best of intentions, this represents a significant amount of money that is then not available for enterprise technology delivery, nor subject to any enterprise licensing efficiencies.
  • Requirements starvation: In many cases, the requirements being met by shadow IT are not on the book of work for the enterprise technology teams, typically because the view is that they would take too long to deliver and that what is already in place will suffice. As users and teams strive to protect existing shadow IT functionality – be that a macro, a report, a cloud solution, MI, or something more involved – invisible change anchors can form, resisting any enterprise change that would degrade or replace that shadow functionality.
  • Value perception: Although success and failure should be metrics-driven absolutes rather than subjective measures, the pace of change and flux within many organisations means that in practice they are often judged subjectively. Because users see and experience the value of shadow IT directly, enterprise IT deliverables can receive a rather less enthusiastic response. While this may not necessarily result in failure, it can certainly limit the amount of perceived success.

Now let’s look at timing. History is littered with technologies that failed because they were either ahead of their time or just missed the moment at which they could have succeeded. So what contributes to this? To get some insight, we need to look at a few examples:

  • Dvorak keyboard. The accepted story is that the qwerty keyboard was specifically designed to slow down typing, so that high-speed typing on mechanical typewriters didn’t cause jams. The Dvorak keyboard, in contrast, was proposed by Dr August Dvorak in 1936 to minimise finger motion and reduce errors. Studies suggest that the Dvorak keyboard is faster, typically offering a 5-10 per cent improvement over qwerty. So although the Dvorak layout is arguably better suited to current technology, qwerty is simply too established, and the improvements are generally not considered worth the transitional effort. Qwerty is considered good enough, and Dvorak keyboards have passed into obscurity.
  • A similar story can be seen in the birth of home video recording and the battle between the Betamax and VHS formats, from Sony and JVC respectively. Although Betamax was generally considered to have the superior picture quality, early VHS tapes lasted for two hours (rather than one for Betamax), allowing a whole film to fit on one tape. Additionally, Sony focused its marketing on high-end consumers, whilst JVC targeted the larger rental market. In this case, the picture quality of VHS was good enough, and it offered additional features to a broad audience. Again, the superior technology was relegated to the history books as a failure.
  • More recently, we can look at Google Glass. Again, the perceived failure had nothing to do with the quality or innovation of the technology itself, but rather with the fact that consumers didn’t really see what problem it was solving, or why they should change from what they already knew and were familiar with.

The reason for looking across such a wide time period and set of lenses is an important one: the facets of human nature, the psychology of mass adoption and the perception of success appear in many areas, and consistently over time.

So what is this telling us about human nature and how it applies to both the adoption of technology, and our success with technology change?

In the enterprise IT drive for future-proofing, is there a danger of building a technically superior product that misses out on success through over-engineering and the resulting increases in cost and delivery time? Technologists get excited about technology, while consumers, users and stakeholders get excited about what it means to them, their daily lives and their businesses.

As with all change, the first question to consider is whether the change is actually necessary. Is the current solution not good enough? Does the new solution offer significant improvements or additional functionality of value to consumers? Is the transitional effort between old and new worth the benefits? Unless a resounding “Yes!” is received from consumers in response to these questions, then we really shouldn’t be too surprised if the technology change is perceived as a failure.

A marvel of innovation it may be, but a failure nonetheless.

For the other articles in the series, click on:

Why change fails – Part 2 of 3: Process

Why Change Fails: People
