Ever wondered how the work of the DCB 0129/0160 Clinical Safety Officer integrates into an organisation’s testing activities? Our three-stage framework sheds light on an otherwise complex area.
Software testing is a vital component of safety assurance in health IT. We find most software manufacturers and Trusts have robust testing procedures in place. But how can we use these assurance activities to support our DCB 0129/0160 Safety Case?
At Safehand, we use a three-stage framework as the basis for demonstrating how software testing evidences the Safety Case’s claims:
Stage 1 – Testing Approach
There are many methods of software testing, some more rigorous than others. In the Safety Case, your responsibility is to convince the reader that the approach is sufficiently thorough that any safety-related defects (or at least the foreseeable ones) have been identified and fixed. A sound, documented testing strategy is the foundation of that argument.
Increasingly manufacturers are making use of workflow tools such as Atlassian’s popular JIRA product. Functionality in these tools allows manufacturers to ensure that requirements and specifications are accurately aligned to tests. Where those specifications relate to Controls cited in the Hazard Log, this traceability adds up to a solid assurance strategy – which deserves explanation in the Safety Case.
We find it surprisingly common to work with software suppliers and Trusts who have never formally documented a test strategy. This frequently leaves state-of-the-art assurance work and valuable test evidence hidden from the outside world. Importantly, writing down the test approach allows the Clinical Safety Officer to set out how their activities integrate into the strategy. The Safety Case is a useful vehicle for conveying this important work.
Stage 2 – Validation of Test Coverage
However rigorous our testing methodology, the safety argument only stacks up if one can demonstrate that key safety-related tests have actually been conducted and passed. Testing is frequently cited as a control in the Hazard Log, and well-written testing controls set out precisely what should be validated and the expected outcome. By tracing these controls to actual tests, one can verify that the test coverage is, at the very least, aligned with the Hazard Log. It’s not uncommon, in undertaking this exercise, to find gaps in the test coverage; as a result, a few additional tests might need to be created.
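The tracing exercise above can be as simple as a script run over exported data. The sketch below is a minimal illustration, assuming a Hazard Log export in which each testing control lists the test IDs it relies on, and a set of test results keyed by test ID. All the IDs, field names and the `coverage_gaps` helper are hypothetical, not taken from any particular tool.

```python
# Minimal sketch: check that every testing control in the Hazard Log
# traces to at least one executed, passing test. IDs are illustrative.

# Hazard Log export: control ID -> test IDs the control relies on
controls = {
    "HAZ-01-C1": ["TST-101", "TST-102"],
    "HAZ-02-C1": ["TST-205"],
    "HAZ-03-C1": [],  # a control that cites no test at all
}

# Test results keyed by test ID
results = {
    "TST-101": "pass",
    "TST-102": "fail",
    # TST-205 was never run
}

def coverage_gaps(controls, results):
    """Return controls whose cited tests are missing, unrun, or failed."""
    gaps = {}
    for control, tests in controls.items():
        problems = []
        if not tests:
            problems.append("no test cited")
        for test_id in tests:
            outcome = results.get(test_id)
            if outcome is None:
                problems.append(f"{test_id} not executed")
            elif outcome != "pass":
                problems.append(f"{test_id} {outcome}")
        if problems:
            gaps[control] = problems
    return gaps

for control, problems in coverage_gaps(controls, results).items():
    print(control, "->", "; ".join(problems))
```

Anything this report surfaces is exactly the kind of gap described above: a control the Safety Case relies on that no passing test currently evidences.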
Another useful exercise is to check whether the testing controls in the Hazard Log are aligned with regression test scripts or user acceptance tests. After all, if a test is being relied upon to manage clinical risk then it’s probably appropriate to make sure that the test is run for every release. The regression test pack is a natural home for such a test.
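This alignment check is a simple set comparison. The sketch below assumes two sets of test IDs, one for tests cited as Hazard Log controls and one for the regression pack; the IDs are illustrative.

```python
# Minimal sketch: find control-related tests missing from the
# regression pack, i.e. tests relied on to manage clinical risk
# that are not guaranteed to run for every release.

control_tests = {"TST-101", "TST-102", "TST-205"}    # cited in the Hazard Log
regression_pack = {"TST-101", "TST-300", "TST-301"}  # run on every release

# Candidates for promotion into the regression pack
missing_from_regression = control_tests - regression_pack
print(sorted(missing_from_regression))
```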
Stage 3 – Management of Identified Defects
The Hazard Log is often rich in theoretical risks, things which could go wrong, but frankly probably won’t because of the controls which have been put in place. But when a defect is identified, we may be one, very real, step closer to harm. Defects, therefore, need to be assessed to determine whether they could contribute any significant clinical risk.
Does every defect demand that scrutiny? Well, sort of. Many defects, in fact most defects, never see the light of day in the live environment; they are identified and quickly fixed. Early in the development lifecycle, during unit testing for example, bugs might be found and fixed on a minute-by-minute basis. These types of bugs rarely warrant the attention of the Clinical Safety Officer.
But the closer we get to release, the more likely it is that a defect identified at that stage will simply be kicked down the road to a later release. In practice this means that the software will go live with the bug exposed to the real world. These defects can pose a very real threat to safety and deserve close attention from the Clinical Safety Officer.
The CSO will need to ascertain the level of risk associated with these defects and ask whether it can be justified. From time to time, difficult decisions may need to be made.
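A pre-release triage of open defects can be sketched as below. The 1–5 severity and likelihood scales, the multiplicative rating and the review threshold are simplified illustrations only, not the risk matrix defined in DCB 0129 itself, which should be used in practice; all IDs and scores are made up.

```python
# Illustrative sketch: score each open defect at release time and flag
# those needing Clinical Safety Officer review. The scales and threshold
# here are simplified stand-ins for the organisation's actual risk matrix.

open_defects = [
    {"id": "DEF-17", "severity": 4, "likelihood": 2},  # 1 (low) .. 5 (high)
    {"id": "DEF-23", "severity": 1, "likelihood": 5},
    {"id": "DEF-31", "severity": 3, "likelihood": 4},
]

REVIEW_THRESHOLD = 8  # illustrative cut-off for mandatory CSO sign-off

def needs_cso_review(defect, threshold=REVIEW_THRESHOLD):
    """Flag defects whose risk rating reaches the review threshold."""
    return defect["severity"] * defect["likelihood"] >= threshold

for defect in open_defects:
    if needs_cso_review(defect):
        print(defect["id"], "requires CSO review before release")
```

The point of the sketch is the workflow, not the arithmetic: every defect carried into a release gets an explicit risk assessment, and those above an agreed threshold cannot ship without the CSO’s justification.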
So, in summary…
Remember, the Safety Case needs to be convincing, not just a random assembly of facts. The relationship between software safety and testing can be complex at times. At Safehand, we find that this simple three-stage framework distils the approach into something that can easily be summarised in the product’s Safety Case.
Dr Adrian Stavert-Dobson is the Managing Partner of Safehand, independent consultants in DCB 0129/0160 compliance, and the author of Health Information Systems: Managing Clinical Risk.