Health IT systems rarely cause harm in isolation; instead they form just one component of a complex ecosystem in which human operators co-exist and interoperate with technology. It is also naïve to conclude that any non-technical factor contributing to risk exists exclusively in the domain of ‘user error’. Whilst all humans are fallible and perhaps even prone to error, taking this to the extent of attributing individual blame in the risk management process is unhelpful.
In the main, humans make mistakes not because they are reckless or malicious but because they operate in an environment which allows, or even facilitates, those errors being made. Even if we succeed in temporarily correcting human behaviour, if adverse conditions prevail we continually require those around us to operate on a knife edge.
The context in which Health IT is operated is sometimes referred to as a socio-technical environment and an understanding of this is important as scenarios which lead to harm evolve from relationships within that ecosystem. Adverse incidents are nearly always multifaceted and involve the interplay of historical design decisions, operating procedures, system architecture, training strategies and many other factors.
In 2010 Sittig and Singh proposed a socio-technical model comprising a set of discrete elements which collectively contribute to the design, deployment and operation of any Health IT system. The components include:
• Hardware and software – the physical infrastructure, computer code, user interfaces, operating system and other software dependencies.
• People – the full set of stakeholders involved in designing, implementing and operating the system along with their collective knowledge.
• Workflow – the clinical business processes in which users operate in order to deliver care as well as the procedures to enter, process and retrieve data from the system.
• Organisation – the decisions, policies and rules which an organisation sets in delivering care and implementing Health IT systems.
• External influences – the wider political, cultural, economic and regulatory environment in which stakeholders operate.
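To make the model concrete, the components above can be used to structure an incident analysis so that contributing factors are tagged by their socio-technical source rather than attributed to a single individual. The sketch below is illustrative only: the class names, factor descriptions and scenario are hypothetical, not part of the Sittig and Singh model itself.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Component(Enum):
    """The five socio-technical elements listed above."""
    HARDWARE_AND_SOFTWARE = auto()
    PEOPLE = auto()
    WORKFLOW = auto()
    ORGANISATION = auto()
    EXTERNAL_INFLUENCES = auto()


@dataclass
class ContributingFactor:
    """One factor in an adverse incident, tagged by model component."""
    component: Component
    description: str


@dataclass
class IncidentAnalysis:
    """A multifaceted incident record: factors, not a single culprit."""
    summary: str
    factors: list[ContributingFactor] = field(default_factory=list)

    def components_involved(self) -> set[Component]:
        # An incident typically spans several components of the ecosystem.
        return {f.component for f in self.factors}


# Hypothetical example: a medication mis-selection analysed across
# components rather than blamed on one user.
incident = IncidentAnalysis(
    summary="Clinician selected incorrect medication from a lengthy list",
    factors=[
        ContributingFactor(Component.HARDWARE_AND_SOFTWARE,
                           "Picking list overly long; similar names adjacent"),
        ContributingFactor(Component.WORKFLOW,
                           "Prescribing performed under time pressure"),
        ContributingFactor(Component.ORGANISATION,
                           "No policy for reviewing list configuration"),
    ],
)
```

Structuring the record this way makes it harder for an investigation to stop at ‘user error’: the analysis is only complete once each component has been considered.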
Importantly the model should make us think differently about how we mitigate risk – once we acknowledge that risk arises from diverse sources we have a wider repertoire to effect its control. When a clinician selects an incorrect medication from an overly lengthy list is it appropriate to manage this through the blunt instruments of disciplinary action and vendor blame? Strategic options open to us might include:
• An examination of the specificity of the original system requirements
• Asking whether a user-centred design process is being truly employed
• Considering the extent and adequacy of system testing
• An analysis of whether training materials adequately point out human factor controls
• An assessment of whether the system business processes truly reflect real-world workflow
• Determining whether senior management sufficiently frees up clinicians’ time to support the system’s design and configuration
Only at this level of analysis can we learn, grow and better mitigate risk. Safety emerges not by accident but from a careful evaluation of all the ecosystem’s components and complex interactions. Only when these are acknowledged and characterised can we begin to tackle the underlying causes of risk in Health IT systems.