In common with many other Austrians, I enjoy “uphill skiing” – attaching skins to my skis, hiking up a mountain, and then gliding back down. This requires me to carry a certain amount of gear.
It’s a delicate balancing act. I don’t want to skip something I might really need in case I get caught in an avalanche. But if I bring too much, I won’t be able to achieve the speed and agility I desire. In the worst-case scenario, I could have a heavy pack stuffed with too much of one thing and not enough of another, leaving me both overloaded and unprepared.
This is a situation in which most enterprise organisations find themselves with regard to software testing. Guilty of performing both too few and too many tests, they’re left overexposed to risk and unable to pivot fast.
Custom apps – too much testing
Overtesting is prevalent among bespoke apps, especially when testing is outsourced to service providers whose payment depends on the number of tests defined and automated. When testers are measured by the volume of tests they produce, organisations will typically get more tests than they need – and not always the right ones, with multiple tests each checking the same thing over and over. With releases delayed while these tests are created, executed, and updated, it’s unsurprising that testing is commonly cited as the main bottleneck in the application delivery process.
One way of addressing this is to focus on business risk coverage rather than the number of test cases.
Consider the 80/20 rule, in which 20% of effort creates 80% of value. Applied to software development, this means 20% of an organisation’s transactions represent 80% of its business value – and 20% of its requirements can cover 80% of its business risk.
Even if testers are trying to cover an organisation’s top business risks, as opposed to just creating a certain number of tests, it’s not easy to get the right tests. By working intuitively, most teams will achieve only 40% risk coverage, accumulating a test suite with a high degree of redundancy. Around two-thirds of tests don’t contribute to risk coverage – but they do make the test suite slow to execute and hard to maintain.
Test case design strategy
If given an accurate assessment of how application functionality maps to business risk, testers can cover those risks extremely efficiently. This represents a huge opportunity to make testing faster and more impactful. By understanding how risk is distributed across an application, and which 20% of transactions correlate to that 80% of business value, it’s possible for testers to cover the top business risks without exorbitant investment.
Once it’s clear what really needs to be tested, the next step is to determine how to carry out those tests as efficiently as possible. From a risk perspective, not all requirements are created equal. And the same is true for the tests created to validate those requirements. A single strategically designed test can achieve as much risk coverage as 10 tests intuitively designed for the same requirement – if not more.
By following an effective test case design strategy, testers can cover the highest-risk requirements as efficiently as possible. Such a strategy guides them towards the fewest tests needed to reach a risk coverage target, and helps ensure that, when a test fails, the team knows exactly which application functionality to investigate.
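To make this concrete, here’s a minimal sketch of that selection logic in Python: a greedy pass that repeatedly picks whichever test adds the most uncovered risk, until a coverage target is reached. The requirements, risk weights, candidate tests, and the 90% target are all illustrative assumptions, not output from any real tool.

```python
# A minimal sketch of risk-based test selection (illustrative data only).

# Assumed business risk weight per requirement (higher = more critical).
risk_weights = {"login": 40, "checkout": 30, "search": 15, "export": 10, "help": 5}

# Assumed mapping of candidate tests to the requirements they cover.
tests = {
    "T1": {"login", "checkout"},
    "T2": {"login"},             # redundant once T1 is selected
    "T3": {"search", "export"},
    "T4": {"help"},
    "T5": {"checkout", "search"},
}

def select_tests(tests, risk_weights, target=0.90):
    """Greedily pick the test adding the most uncovered risk until the
    coverage target is reached (or no remaining test adds anything)."""
    total = sum(risk_weights.values())
    covered, chosen = set(), []
    while sum(risk_weights[r] for r in covered) < target * total:
        best = max(tests, key=lambda t: sum(risk_weights[r] for r in tests[t] - covered))
        if not tests[best] - covered:  # remaining tests are pure redundancy
            break
        chosen.append(best)
        covered |= tests[best]
    return chosen, sum(risk_weights[r] for r in covered) / total

chosen, coverage = select_tests(tests, risk_weights)
print(chosen, f"{coverage:.0%}")  # ['T1', 'T3'] 95%
```

Note that the redundant test T2 is never selected: once T1 covers the login requirement, T2 adds no new risk coverage – exactly the kind of duplication an intuitively built test suite accumulates.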
When it comes to testing custom apps, less is more – more value to the team and stakeholders, more time to spend on implementing tests in order to release on time, and more agility when a project changes course and the test suite needs to be updated.
Packaged apps – not enough testing
At the opposite end of the spectrum are packaged apps such as SAP and Salesforce. The most common strategy for testing a change in these applications is not to test the change at all.
However, when an update needs to be deployed into production, most organisations will make their key users test it. But key users don’t like to test. Testing packaged app updates can be a lengthy, manual ordeal, and business process tests can be outdated or undocumented, making for a frustrating and error-prone process. And, of course, these testing duties are an addition to a key user’s already busy schedule.
To address a lack of effective pre-release testing, operations teams routinely add a “hypercare” phase immediately after an update goes live. Here, developers and project staff are put on standby to fix any emergency issues that arise in production – issues that would be significantly easier and cheaper to fix if discovered in pre-release testing. Most SAP enterprise customers favour this deployment strategy, even though key user testing typically lasts one or two weeks and a hypercare phase can last up to three months.
Lengthy and expensive
Unfortunately for those organisations that rely on key user testing and hypercare to test their packaged apps, most packaged app vendors are delivering more frequent updates than ever, requiring customers to keep up.
But there’s a better alternative. By analysing an organisation’s entire SAP ecosystem overnight, change impact analysis reports which changes pose business and technical risks, which tests should be created or run based on those risks, which tests cover frequently changing “hot spots” and should therefore be automated, and which custom code isn’t being used and therefore doesn’t need testing.
There’s one consideration, however. In a large packaged app, a change to a central component can affect so many other objects – all of which will be flagged as “impacted” – that the results become too broad to act on, diminishing the technique’s effectiveness. To lighten their testing load, organisations should narrow down the list and, by considering object dependencies, focus on the “most at risk” objects.
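As an illustration of that narrowing step, here’s a minimal sketch in Python: a breadth-first walk over an inverted dependency graph that scores each impacted object by its production usage, discounted by its distance from the changed object. The graph, usage figures, and scoring formula are illustrative assumptions; a real tool would derive them from the SAP system itself.

```python
# A minimal sketch of change impact analysis over an object dependency
# graph (all names and numbers are illustrative assumptions).
from collections import deque

# "X is used by Y" edges: if X changes, each Y may be impacted.
used_by = {
    "CORE_PRICING": ["SALES_ORDER", "INVOICE"],
    "SALES_ORDER": ["CHECKOUT_UI"],
    "INVOICE": [],
    "CHECKOUT_UI": [],
}

# Assumed production usage counts – unused objects needn't be tested.
usage = {"CORE_PRICING": 0, "SALES_ORDER": 900, "INVOICE": 450, "CHECKOUT_UI": 700}

def most_at_risk(changed, used_by, usage):
    """Breadth-first walk from each changed object; score every reachable
    object by production usage, discounted by dependency distance."""
    scores = {}
    seen = set(changed)
    queue = deque((obj, 0) for obj in changed)
    while queue:
        obj, dist = queue.popleft()
        score = usage.get(obj, 0) / (dist + 1)
        if score > 0:
            scores[obj] = score
        for dependant in used_by.get(obj, []):
            if dependant not in seen:
                seen.add(dependant)
                queue.append((dependant, dist + 1))
    # Highest score = "most at risk": test these first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(most_at_risk({"CORE_PRICING"}, used_by, usage))
# [('SALES_ORDER', 450.0), ('CHECKOUT_UI', 233.33...), ('INVOICE', 225.0)]
```

In this toy example, the unused CORE_PRICING component drops out of the results entirely – echoing the point that custom code nobody executes doesn’t need testing – while heavily used, directly dependent objects rise to the top of the list.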
Data – not enough testing
As modern applications ingest, integrate, and transform vast amounts of data, there are countless opportunities for data to become compromised and very few formalised processes ensuring data integrity across the entire data landscape.
An organisation may have managed to achieve the right amount of testing across all its custom and packaged apps that collect data and transform it into valuable information. It’s unlikely, though, that the underlying data is regularly tested. While each of a system’s components might look like it’s performing as it should, if the data is off, the business need isn’t being met. Rather, the business is being placed at extreme risk.
Suppose a packaged application introduces a subtle change to data formats, preventing one out of every 100,000 records from being processed. This could go unnoticed until an irate customer complains. Likewise, a new status field could be introduced, only for a team’s bug fix to overwrite it a month later each time a user profile is updated.
By constantly checking data as it enters and moves through these applications, automated “quality gates” could catch such issues as they’re introduced, enabling them to be eliminated before they impact the business.
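As a sketch of what such a gate might look like, the Python below validates each record against a few format rules as it enters a pipeline and quarantines anything that fails, so a single malformed record in 100,000 surfaces immediately rather than in a customer complaint. The record layout and rules are illustrative assumptions, not a real schema.

```python
# A minimal sketch of an automated data quality gate (the fields and
# rules below are illustrative assumptions, not a real schema).
import re
from datetime import datetime

def is_iso_date(value):
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False

RULES = {
    "customer_id": lambda v: bool(re.fullmatch(r"C\d{8}", v)),
    "order_date": is_iso_date,
    "status": lambda v: v in {"NEW", "PAID", "SHIPPED", "CANCELLED"},
}

def quality_gate(records):
    """Split records into passes and failures, recording which rule each
    failure broke so it can be quarantined and alerted on."""
    passed, failed = [], []
    for rec in records:
        errors = [field for field, ok in RULES.items() if not ok(rec.get(field, ""))]
        (failed if errors else passed).append((rec, errors))
    return passed, failed

records = [
    {"customer_id": "C00012345", "order_date": "2024-03-01", "status": "PAID"},
    {"customer_id": "12345", "order_date": "01/03/2024", "status": "PAID"},
]
passed, failed = quality_gate(records)
print(f"{len(passed)} passed;", [(r["customer_id"], e) for r, e in failed])
# 1 passed; [('12345', ['customer_id', 'order_date'])]
```

Running checks like these at each hand-off between systems narrows the window in which issues such as the overwritten status field above can go unnoticed.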
To date, financial, insurance, and healthcare companies have achieved some impressive results with automated end-to-end testing, such as the ability to test 200 million values in just 20 minutes. I’d say it’s essential for any organisation whose applications consume, manipulate, or output data. Without it, subtle data issues can quickly snowball into a crisis for which you’d need an avalanche shovel to dig your way out. Pack wisely.
Wolfgang Platz
Founder and Chief Strategy Officer, Tricentis