The two kingpins in this approach will be data and automation, working in tandem to convert insights about what needs testing into rigorous automated tests. But first, let’s consider why it remains so challenging to design, develop and test in-sprint.
20 years on, silos remain THE challenge in software delivery
As the Agile Manifesto approaches its 20th anniversary, the software delivery lifecycle remains riddled with silos. These silos not only create time-consuming miscommunication; they also amplify manual effort. Each time information moves from one silo to the next, it must be converted from one format to another.
These “information hops” delay releases and introduce defects as misinterpretation creeps in at every stage. Let’s now look at each silo in more detail.
From a test and development perspective, gathering requirements in text-based documents and disparate diagrams is simply not fit for purpose. Fragmentary written user stories and documents are far removed from the precise logic that needs to be developed. Meanwhile, there is typically little or no formal dependency mapping between the text-based formats and the static diagrams.
Software designs therefore introduce bugs when translated into source code, in turn creating time-consuming rework. In fact, multiple studies estimate that requirements are responsible for over half of all defects,[i] while further research estimates that developers spend half their time fixing bugs.[ii] Design defects therefore take up a large chunk of the time that should be spent developing new functionality.
The static nature of requirements further increases manual effort in testing. “Flat” documents and diagrams are not ready-made for automation, and testers are often forced to convert designs manually into test cases, data and scripts.
In addition to wasting time, these manual processes undermine quality. Even a simple system today will likely require thousands of tests before a release. Faced with informal and incomplete requirements, testers cannot systematically or automatically identify and create every test that a release requires.
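To illustrate the contrast, here is a minimal sketch of what systematic test derivation looks like once a requirement is captured as an explicit model rather than free text. The refund rule below is hypothetical, invented purely for illustration: because the logic is formal, every input combination and its expected outcome can be enumerated mechanically.

```python
from itertools import product

# Hypothetical requirement, modelled formally: a refund is approved only if
# the item was returned within 30 days AND is undamaged AND the customer
# has a receipt.
def refund_approved(within_30_days: bool, undamaged: bool, has_receipt: bool) -> bool:
    return within_30_days and undamaged and has_receipt

# With the rule explicit, all input combinations -- and their expected
# outcomes -- can be derived automatically, including the seven negative
# ("unhappy path") cases that manual, happy-path-only design tends to skip.
test_cases = [
    {"inputs": combo, "expected": refund_approved(*combo)}
    for combo in product([True, False], repeat=3)
]

# Each derived case doubles as an executable check against the implementation.
for case in test_cases:
    assert refund_approved(*case["inputs"]) == case["expected"]

negative = sum(1 for c in test_cases if not c["expected"])
print(f"{len(test_cases)} tests derived, {negative} negative-path")
# → 8 tests derived, 7 negative-path
```

Three boolean inputs yield only eight cases here; real systems multiply such factors into the thousands of tests mentioned above, which is precisely why enumeration has to be automated rather than manual.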
Manual test design instead focuses almost exclusively on “happy path” scenarios, over-testing these at the expense of scenarios most likely to cause bugs. Meanwhile, out-of-date and invalid tests pile up, creating test failures that push testing further behind releases.