The illusion of communication – A silent hazard of Health IT systems
9 June 2015

When we picture the hazards that Health IT could introduce, some risks to patient safety immediately come to mind: corrupt data, the wrong patient, missing data, an unavailable system, and so on. But hidden within the depths of Electronic Medical Records (EMRs) and other Health IT solutions lies a more mysterious hazard, one which can be hard to characterise and may not even become apparent until a system is used in anger.

Most users of Health IT harness the technology for more than the simple capture and retrieval of patient notes. In some clinical processes, Health IT represents the mainstay of communication between health professionals. When multiple individuals are involved in a complex workflow such as investigation ordering or referral management, Health IT often takes on the role of project manager, facilitating the co-ordination and co-operation of disparate contributors.

For example, if I order a blood test this might trigger a sample-collection task for a phlebotomist, a request for the laboratory to undertake the analysis, the generation of a report, and a prompt to remind me to review the result. Logic dictates that there are only two possible outcomes here: a result, or a timely reason why a result could not be obtained. Either way, as a clinician I am informed and can take appropriate action; I have the opportunity to work around whatever difficulties there may be.
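
As a toy illustration of that two-outcome logic (the order identifiers, statuses and function names below are invented for this sketch, not taken from any real system):

```python
# A toy model of the ordering workflow described above. The identifiers
# and statuses are invented for illustration; real systems are far richer.
from enum import Enum

class Outcome(Enum):
    RESULT = "result available for review"
    NO_RESULT = "timely reason why no result could be obtained"
    # Note what is missing: there is no third member for 'silence'. A
    # stalled order never reports back at all, so it is invisible here.

def notify_originator(order_id: str, outcome: Outcome) -> None:
    """Alert the ordering clinician - the two outcomes they are trained to expect."""
    print(f"Order {order_id}: {outcome.value}")

notify_originator("FBC-1234", Outcome.RESULT)
notify_originator("FBC-1235", Outcome.NO_RESULT)
# A silently stalled order produces no call to notify_originator at all,
# and nothing in this model detects that absence.
```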

But an interesting hazard arises when there is a third possible outcome: when communication breaks down and what I get back from the activity I've initiated is, well… nothing. No system errors, no alarms, no warnings – just an electronic black hole silently consuming data, interrupting business and delaying clinical care. Because the process originator is never alerted to the problem, the detectability of these hazards remains low, and the opportunity to work around the issue simply isn't presented. As the initiator of the process I've upheld my part of the bargain; I've adhered to my training and local policy. I shouldn't need to manage the project manager.

What results is not communication but the illusion of communication: an assumption that, as users, we have been heard; an already stunted conversation transformed into one which has stalled. Add to this the demands of a busy clinical environment and it is not surprising that systems occasionally suffer from tasks being 'lost to follow-up'.

In complex workflows there may be dozens of underlying reasons why a process might stall: technical issues such as messaging failure or the configuration of workflow statuses, or human factors such as failing to update the system when a task has been completed. Each dependency might fail only rarely, but it is the sheer number of dependencies that matters: combined, their failure rate can be significant.
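
To see why, a back-of-the-envelope sketch (the reliability figures here are purely illustrative assumptions, not measurements from any real system):

```python
# Back-of-the-envelope sketch: the reliability figures are illustrative
# assumptions, not measurements from any real system.
import math

# Ten workflow dependencies, each individually 99.5% reliable
step_reliabilities = [0.995] * 10

# Assuming independent steps, the whole process succeeds only if every
# step does, so end-to-end reliability is the product of the parts.
end_to_end = math.prod(step_reliabilities)

print(f"End-to-end success: {end_to_end:.1%}")                 # ~95.1%
print(f"Processes at risk of stalling: {1 - end_to_end:.1%}")  # ~4.9%
```

Ten individually dependable steps still leave roughly one process in twenty at risk of never completing.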

There are a couple of approaches to mitigating the risk in these situations. Firstly, one can identify the dependencies in the workflow chain and ensure that the design, configuration, testing, monitoring and operation of these components are afforded the degree of rigour they deserve. The limitation of this approach alone lies in being able to foresee all the potential failure modes and to effect practical risk reduction for each.

A second and supplementary line of attack is to monitor not the performance of individual components but the process as a whole. For example, a system administrator may be able to query the database to retrieve all orders which have remained in a status of 'Pending' for longer than five days. Even better, the system may have a screen to inform me of the status of the tasks I have initiated. These creative solutions provide a means of alerting clinicians to stalls in the process irrespective of the cause, and can provide the seed for further investigation and root cause analysis when things do go awry. Intermittent, end-to-end process validation may not detect failure with immediacy, but the ability to employ a backstop detection can provide a useful means of controlling complex risks.
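
A minimal sketch of that 'stale order' backstop, assuming a hypothetical orders table with id, status and updated_at columns (timestamps stored as ISO-format text); any real EMR schema will differ:

```python
# Minimal sketch of a 'stale order' backstop query. The schema is
# hypothetical: an 'orders' table with 'id', 'status' and 'updated_at'
# (ISO-format timestamp) columns.
import sqlite3
from datetime import datetime, timedelta

def stale_pending_orders(conn: sqlite3.Connection, max_age_days: int = 5):
    """Return orders stuck in 'Pending' for longer than max_age_days."""
    cutoff = (datetime.now() - timedelta(days=max_age_days)).isoformat()
    cur = conn.execute(
        "SELECT id, updated_at FROM orders "
        "WHERE status = 'Pending' AND updated_at < ?",
        (cutoff,),
    )
    return cur.fetchall()

# Flag stalled orders for investigation, whatever the underlying cause.
conn = sqlite3.connect("emr.db")  # hypothetical database file
for order_id, updated_at in stale_pending_orders(conn):
    print(f"Order {order_id} pending since {updated_at} - investigate")
```

The value of a query like this is that it is agnostic to the failure mode: messaging fault, misconfiguration or human omission all surface the same way, as an order that has sat still for too long.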

When we're building our hazard register for systems driven by workflow, we need to ask ourselves some key questions. Could this process be interrupted? If it were interrupted, would I as a user ever know? What could be done to minimise the risk of interruption? If something were to fall through the net, how might we detect it?

Dr Adrian Stavert-Dobson is the Managing Partner of Safehand, independent consultants in clinical risk management, and the author of Health Information Systems: Managing Clinical Risk.
