Integrity in collaborative IT systems: Part 1 – the concept of dependability

Recently I’ve been looking at collaborative decision-making in mental health, with the aim of identifying the technology requirements to support shared decision-making (details of this project are here). One conclusion is that the underlying IT infrastructure needs to be considered, in particular its reliability.

In general, a collaborative IT system can be understood as a distributed system with a particular purpose, where users with different roles collaborate to achieve a common goal. Examples include university research collaboration, public transport and e-government. In the example of health IT, a medical practice might have an IT system where a patient makes an appointment, medical records are inspected and updated, treatment decisions are made and recorded, and the patient may be referred to a specialist.
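To make the idea of roles collaborating in a shared workflow concrete, here is a minimal sketch in Python (the roles and steps are my own illustration of the medical-practice example, not a real system):

```python
from enum import Enum

class Role(Enum):
    PATIENT = "patient"
    GP = "general practitioner"
    SPECIALIST = "specialist"

# The medical-practice example as a sequence of workflow steps,
# each carried out by a particular role towards the common goal.
workflow = [
    ("make an appointment", Role.PATIENT),
    ("inspect and update medical records", Role.GP),
    ("make and record a treatment decision", Role.GP),
    ("refer the patient to a specialist", Role.GP),
    ("assess the referral", Role.SPECIALIST),
]

for step, role in workflow:
    print(f"{role.value}: {step}")
```

Even a toy model like this makes explicit who is responsible for each step, which becomes useful later when we ask dependability questions about the workflow.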

IT resilience and dependability
The resilience of an IT system is its capability to continue satisfying its service requirements when some of its components fail or are changed. If parts of the system fail due to faults, design errors or cyber-attack, it should continue to deliver the required services; similarly, a software update should not adversely affect those services. Resilience is an important aspect of dependability, which is defined precisely in terms of availability, reliability, safety, security and maintainability [Avizienis et al. 2004]. Importantly, dependability is not just about resilience, but also about trust and integrity.
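To give one concrete example of these attributes, availability is commonly estimated from the mean time between failures (MTBF) and the mean time to repair (MTTR). A minimal sketch, with figures invented purely for illustration:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the service is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Invented figures: a failure every 2000 hours on average,
# taking 4 hours to restore service.
print(f"{availability(2000, 4):.4%}")  # -> 99.8004%
```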

IT dependability is usually understood at the technical level (the network or the software) and does not consider the design of the organisation (for example, an error arising from lack of training).

Organisational resilience and dependability
Just as an IT system can be resilient at a technical level, an organisation (such as a health provider) can also be resilient and dependable in meeting high-level organisational requirements. Organisational requirements are defined in terms of the organisation itself (for example, its business processes or workflows) and are independent of any particular IT implementation. I think the idea of dependability requirements for an organisation is also useful, and these may be specified separately. In healthcare, they might include the following:

  • implementation – ensure that agreed decisions are actually carried out
  • avoidance of error – e.g. avoid excessive workloads
  • timeliness – e.g. timely cancer diagnosis
  • transparency – e.g. is there an audit trail of critical decisions and actions?
  • accountability – e.g. is it possible to challenge decisions?

Technology can help to ensure that these dependability requirements are satisfied. For example, excessive workload may be detectable by automated monitoring (e.g. one person doing too many tasks) in the same way that technical faults or security violations can be detected.
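As a rough sketch of what such monitoring could look like (the task records and threshold are invented for illustration, not taken from a real system):

```python
from collections import Counter

# Hypothetical open tasks in a medical practice: (task, assignee) pairs.
open_tasks = [
    ("review referral", "dr_jones"),
    ("update care plan", "dr_jones"),
    ("triage new patient", "dr_jones"),
    ("book follow-up", "nurse_patel"),
]

MAX_OPEN_TASKS = 2  # illustrative workload threshold

def overloaded(tasks, limit):
    """Return assignees whose number of open tasks exceeds the limit."""
    counts = Counter(assignee for _, assignee in tasks)
    return {who: n for who, n in counts.items() if n > limit}

print(overloaded(open_tasks, MAX_OPEN_TASKS))  # {'dr_jones': 3}
```

A real monitor would of course draw on live workflow data rather than a static list, but the organisational check itself can be this simple.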

In Part 2, I will discuss the need for a test and simulation environment.

References
[Avizienis et al. 2004] Avizienis A, Laprie J-C, Randell B, and Landwehr C, “Basic concepts and taxonomy of dependable and secure computing,” IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 1, pp. 11-33, Jan.-March 2004.

Ownership of Health Data

I’ve been thinking about ideas for the upcoming HealthHack (nwhealthhack.com). In addition to participatory design (see last post), I’m also interested in transparency and accountability of eHealth infrastructure. Health apps and devices often record real-time data. Examples include “ecological momentary interventions” that ask patients how they are feeling, and smart sensing devices that transmit data on activity or physiological states.

If I am using a device that produces real-time data, I would like an app that can provide the following information (a sketch of a possible data model follows the list):
(a) What is happening to the data produced by the device? Where does it go, and where is it stored? Which service providers are involved? What are the estimated risks to integrity and privacy in each case?
(b) Which humans can see the data and why? What decisions can they make?
(c) How is the data processed? What algorithms are applied to the data and why? E.g. visualisation, decision support. In each case, what are the risks of error?
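As a sketch of how an app might represent answers to (a)–(c), here is a minimal data model in Python; all names and fields are my invention, intended only to show the shape of the information:

```python
from dataclasses import dataclass, field

@dataclass
class Custodian:
    """A service provider or person who holds or can see the data (a, b)."""
    name: str
    role: str           # e.g. "cloud storage", "clinician"
    access_reason: str  # why they can see it, and what they may decide

@dataclass
class Processor:
    """An algorithm applied to the data (c)."""
    name: str        # e.g. "mood-trend visualisation"
    purpose: str     # why it is applied
    error_risk: str  # known risks of error

@dataclass
class DataTrace:
    """Everything the app can tell me about one stream of device data."""
    device: str
    storage_locations: list[str]
    custodians: list[Custodian] = field(default_factory=list)
    processors: list[Processor] = field(default_factory=list)

trace = DataTrace(
    device="wrist heart-rate sensor",
    storage_locations=["phone app", "vendor cloud"],
    custodians=[Custodian("vendor", "cloud storage", "service operation")],
    processors=[Processor("trend chart", "visualisation",
                          "smoothing may hide sudden changes")],
)
```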

Some important points:
1. This is not only about data, but also about processes and organisations.
2. It’s not just about privacy, but also about integrity and reliability.
3. The client or patient need not understand the information in detail, but they may consult an independent expert who can understand it – just as with open source software.
4. Ideally we need modelling on multiple levels of abstraction (e.g. a component can be a secure wireless connection, or it can be an algorithm).

Although this requires some challenging modelling, I think we can take the first steps by tracking the data: showing where it goes, and which algorithms or organisations are using it, as in the sketch below.
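As a sketch of that first step (event names and fields are invented for illustration), an append-only log could record where each data item goes and which algorithm or organisation touches it:

```python
import json
import time

audit_log = []  # in a real system this would be tamper-evident storage

def record_event(data_id: str, actor: str, action: str, detail: str = ""):
    """Append one provenance event: who did what to which data item."""
    audit_log.append({
        "time": time.time(),
        "data_id": data_id,
        "actor": actor,    # organisation, service or algorithm
        "action": action,  # e.g. "stored", "visualised", "shared"
        "detail": detail,
    })

record_event("hr-stream-42", "vendor-cloud", "stored", "region unknown")
record_event("hr-stream-42", "trend-algorithm-v1", "visualised")
print(json.dumps(audit_log, indent=2))
```

The next challenge would be ensuring that only acceptable things are happening. More on this later…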