
Integrity in collaborative IT systems: Part 2 – the need for rich test environments

In Part 1, I argued that dependability as a concept might be applied to organisations as well as to technical systems. In this post I will argue that both the organisational and technical levels should be modelled together as an interconnected system, and that test environments for dependability should include the simulation of organisational problems as well as technical problems.

Socio-technical stack
Higher level organisational requirements cannot be considered in isolation from the underlying IT requirements. Organisational and IT problems can interact in complex ways, and such interactions are common in real-world organisations. These different levels therefore need to be considered together. Such a multi-level system can be viewed as a socio-technical stack [Baxter & Sommerville 2011].

The different levels of requirements can be listed as follows:

  1. Specific organisational functionality requirements (e.g. medical workflows)
  2. Organisational dependability requirements (e.g. avoiding error)
  3. Specific IT requirements for the organisation (resources, networks etc.)
  4. IT dependability requirements (availability, security etc.)

Dependability requirements (2 and 4) may be more generic than requirements 1 and 3. For example, all organisations will want to reduce error, but they may have different measures of what is acceptable. Requirements 3 and 4 can usually be satisfied by off-the-shelf components, though these would need to be configured.

We assume that the software satisfying the first set of requirements (1) serves multiple users with different services. Such software is often called “enterprise application software”. In a healthcare system, users can be patients, clinicians or administrators. They access their own services in the system and have specific actions available to them at particular stages in their workflow. For example, a patient could review their details or access records following a consultation. A clinician could request a test or respond to a symptom update from a patient.
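
As a toy illustration, the role-specific actions available at a workflow stage could be modelled as a simple lookup. The roles, stages and action names below are assumptions drawn from the examples above, not part of any existing system.

```javascript
// Hypothetical sketch: role-based actions at workflow stages in an
// enterprise health application. Roles, stages and actions are
// illustrative assumptions.
const availableActions = {
  patient: {
    "post-consultation": ["review details", "access records", "send symptom update"],
  },
  clinician: {
    "post-consultation": ["request test", "respond to symptom update"],
  },
};

// Look up which actions a given role may perform at a given stage.
function actionsFor(role, stage) {
  return (availableActions[role] || {})[stage] || [];
}

console.log(actionsFor("patient", "post-consultation"));
// -> ["review details", "access records", "send symptom update"]
```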

Need for a test environment with simulation
To improve organisational resilience and dependability, it is important to develop new methods for detecting and correcting organisational problems. To test these detection and recovery methods, it is useful to run simulated scenarios where human mistakes and IT failures can occur together. “Simulations” might involve people participating (as in a kind of role-playing game) or simulated computational agents [Macal 2016].

Examples of failures that might be simulated (a minimal simulation sketch follows the list):

  • mistakes (e.g. choosing the wrong test)
  • administration failure: a patient receives no response to a request (which should have a time limit)
  • software failure: e.g. data interoperability issues
  • malware
  • hardware failure
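
To make this concrete, here is a minimal sketch of a scenario runner that injects simulated failures into a simplified clinical workflow and records what a detector catches. All names, failure rates and workflow steps are illustrative assumptions, not part of any existing system.

```javascript
// Hypothetical scenario runner: injects random failures into workflow
// steps and checks whether a given detector notices them.
const FAILURE_TYPES = ["mistake", "admin_timeout", "software", "malware", "hardware"];

function runScenario(steps, failureRate, detector) {
  const log = [];
  for (const step of steps) {
    // Randomly inject one of the simulated failure types.
    const failed = Math.random() < failureRate;
    const failure = failed
      ? FAILURE_TYPES[Math.floor(Math.random() * FAILURE_TYPES.length)]
      : null;
    log.push({ step, failure, detected: failure ? detector(failure) : false });
  }
  return log;
}

// A first-stage detector that only catches the "easy" failures.
const simpleDetector = (failure) =>
  failure === "hardware" || failure === "admin_timeout";

const events = runScenario(
  ["book appointment", "order test", "review results", "notify patient"],
  0.3,
  simpleDetector
);
console.log(events.filter((e) => e.failure && !e.detected)); // undetected problems
```

Later iterations could swap in more sophisticated detectors and richer scenarios, matching the staged development described below.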

A test environment needs to be coupled with the iterative development of the system being tested. This would involve developing increasingly complex problem-detection software in parallel with increasingly challenging scenarios. For example, the first version might involve simple errors that are easy to detect. Subsequent stages might involve more detailed work scenarios with more complex errors or failures. The more advanced stages might also involve real users in different roles (e.g. nursing students, medical students) and include time pressure.

Importance of agile and participatory design
In addition to developing safe systems, it is important to be able to change them safely, so the development and test methodology needs to include change management. Agile software engineering is particularly important here, along with participatory design (co-design) methods. Ideally the system would be co-designed iteratively by the different users as they become aware of error-prone situations (such as cognitive overload) while participating in the evaluations. Design needs to be informed by cognitive science as well as computer science.

In later posts, I plan to talk about the role of AI and decision support in organisational dependability.

References:

[Baxter & Sommerville 2011] Baxter G and Sommerville I, “Socio-technical systems: From design methods to systems engineering,” Interacting with Computers, vol. 23, no. 1, pp. 4-17, 2011.

[Macal 2016] Macal CM, “Everything you need to know about agent-based modelling and simulation,” Journal of Simulation, vol. 10, no. 2, pp. 144-156, 2016.

Integrity in collaborative IT systems: Part 1 – the concept of dependability

Recently I’ve been looking at collaborative decision-making in mental health, with the aim of identifying the technology requirements to support shared decision-making. Details of this project are here. One conclusion is that the underlying IT infrastructure needs to be considered, and in particular its reliability.

In general, a collaborative IT system can be understood as a distributed system with a particular purpose, where users with different roles collaborate to achieve a common goal. Examples include university research collaboration, public transport and e-government. In the example of health IT, a medical practice might have an IT system where a patient makes an appointment, medical records are inspected and updated, treatment decisions are made and recorded, and the patient may be referred to a specialist.

IT resilience and dependability
The resilience of an IT system is its capability to satisfy service requirements when some of its components fail or are changed. If parts of the system fail due to faults, design errors or cyber-attack, the system should continue to deliver the required services. Similarly, if a software update is made, the system services should not be adversely affected. Resilience is an important aspect of dependability, which is defined precisely in terms of availability, reliability, safety, security and maintainability [Avizienis et al. 2004]. Importantly, dependability is not just about resilience, but also about trust and integrity.

IT dependability is usually understood on a technical level (the network or the software) and does not consider the design of the organisation (for example, if an error occurs due to lack of training).

Organisational resilience and dependability
Just as an IT system can be resilient on a technical level, an organisation (such as a health provider) can also be resilient and dependable in meeting high-level organisational requirements. Organisational requirements are defined in terms of the organisation itself, independently of any IT; for example, they may be defined in terms of business processes or workflows. I think the idea of dependability requirements for an organisation is also useful, and these may be specified separately. In healthcare, they might include the following:

  • implementation – ensure that agreed decisions are actually carried out
  • avoidance of error – e.g. avoiding excessive workloads
  • timeliness – e.g. for cancer diagnosis
  • transparency – e.g. is there an audit trail of critical decisions and actions?
  • accountability – e.g. is it possible to challenge decisions?

Technology can help to ensure that these dependability requirements are satisfied. For example, excessive workload may be detectable by automated monitoring (e.g. one person doing too many tasks) in the same way that technical faults or security violations can be detected.
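
As a rough illustration, an organisational dependability check could follow the same pattern as a technical fault detector. The sketch below flags excessive workload; the threshold and the task-record format are assumptions for illustration only.

```javascript
// Hypothetical workload monitor: flags a person assigned too many
// concurrent tasks, analogous to a technical fault detector.
// The threshold and task records are illustrative assumptions.
const MAX_CONCURRENT_TASKS = 8;

function checkWorkload(assignments) {
  const counts = {};
  for (const { assignee, status } of assignments) {
    if (status === "open") counts[assignee] = (counts[assignee] || 0) + 1;
  }
  return Object.entries(counts)
    .filter(([, n]) => n > MAX_CONCURRENT_TASKS)
    .map(([assignee, n]) => ({ assignee, openTasks: n, alert: "excessive workload" }));
}

console.log(checkWorkload([
  { assignee: "nurse-1", status: "open" },
  { assignee: "nurse-1", status: "open" },
  // ... more task records would come from the workflow system
]));
```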

In Part 2, I will discuss the need for a test and simulation environment.

References
[Avizienis et al. 2004] Avizienis A, Laprie J-C, Randell B, and Landwehr C, “Basic concepts and taxonomy of dependable and secure computing,” IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 1, pp. 11-33, Jan.-March 2004.

Representing user perspectives in IT systems

Taking user perspectives into account when designing technology means that the technology should fit around users’ concerns and perceptions. This process is generally called “user-centered design” (see for example: http://en.wikipedia.org/wiki/User-centered_design). Why is this important? It can lead to greater simplicity, fewer errors and more satisfied users. There are other advantages that go deeper – such as gaining new insights into what users really require.

The key components of a perspective can be described as follows:

Concepts: how do we describe and visualise the world?  For example, when we prepare documents online, we think of objects such as text and diagrams. These are the concepts. They also include applicable operations (create, edit, save etc.) and the workflow of producing a document. Similarly, when we look up an online map, the concepts include streets, buildings, green spaces etc. Going shopping among physical shops also involves a workflow with applicable operations.

Concerns: what kind of things are important? (including goals, values etc.) For example, academics are often concerned about collaborative documents being easy to produce and manage, as well as meeting paper submission deadlines. A user with low literacy may be concerned about completing an online form correctly without assistance. Differing concerns cause the key concepts to differ as well. For people with mobility restrictions, concepts such as “easy walking area” or “slow traffic” on a map may be key, while drivers with busy schedules might look for “fast traffic”.
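
To make this concrete, a user perspective might be captured in a simple data structure pairing concepts with concerns. This is only a sketch; the field names and example values are my own illustrative assumptions.

```javascript
// Hypothetical representation of a user perspective: domain concepts
// (with applicable operations) plus ranked concerns. All field names
// and values are illustrative assumptions.
const mobilityPerspective = {
  role: "map user with mobility restrictions",
  concepts: [
    { name: "easy walking area", operations: ["locate", "route via"] },
    { name: "step-free entrance", operations: ["locate"] },
  ],
  concerns: [
    { name: "avoid steps and steep slopes", priority: 1 },
    { name: "minimise distance", priority: 2 },
  ],
};

// A route planner could weight its choices by the user's concerns,
// e.g. preferring step-free routes even when they are longer.
function topConcern(perspective) {
  return perspective.concerns.slice().sort((a, b) => a.priority - b.priority)[0];
}
console.log(topConcern(mobilityPerspective).name); // -> "avoid steps and steep slopes"
```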

Representing user perspectives in IT goes way beyond usability or UX (although that is important). In my view, it should satisfy the following requirements:

1. It should be about the actual actions of the IT infrastructure the user is depending on to satisfy their goals. To what extent are the user concerns actually guiding the infrastructure resource allocation and priorities? This is particularly an issue with healthcare systems and privacy.

2. The infrastructure that the user depends on should be transparent and accountable. The visibility should be adjustable depending on the perspective of the particular user.

3. It’s not just about end-users or customers, but also about staff roles within an organisation. For example, a system administrator will have different concepts and concerns from those of a software developer (although both roles now increasingly interact together in the field of “DevOps”). So “the things that matter” then include: “does this help me to do my job effectively? does it cause more complexity and stress? does it reduce errors or create more potential for errors? does it support creativity?”

4. It’s not just about design; it’s also about models based on a user’s perspective. Models allow automated decision-making and prediction. Examples include statistical predictions based on user preferences. These could be said to represent some aspects of a user’s perspective. But I am thinking here particularly about qualitative models. These are models that represent the user’s concepts and concerns in a symbolic language. The language must be human-readable and machine-readable. These topics take us into the field of knowledge-based systems, which I will talk about in a later post.
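
As a rough sketch of what “human-readable and machine-readable” might look like, a qualitative model could be a set of symbolic rules over the user’s concepts and concerns. The mini rule format below is an illustrative assumption, not an established knowledge-representation formalism.

```javascript
// Hypothetical qualitative model: symbolic statements readable by both
// humans and machines. The rule format is an illustrative assumption.
const model = [
  { if: ["user is overloaded", "deadline is near"], then: "hide low-priority notifications" },
  { if: ["user asked for privacy"], then: "do not share location" },
];

// Naive forward chaining: apply every rule whose conditions all hold.
function applicableActions(facts, rules) {
  return rules
    .filter((r) => r.if.every((condition) => facts.includes(condition)))
    .map((r) => r.then);
}

console.log(applicableActions(["user is overloaded", "deadline is near"], model));
// -> ["hide low-priority notifications"]
```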

Smart Assistance for Lifestyle Decisions

Smart assistance technology helps users to achieve goals using informative messages such as recommendations or reminders. To be “smart” the messages need to be sensitive to the user’s changing context, including their current knowledge and perceptions, their mood and environment.

Right now, I’m starting to explore the feasibility of a “smart assistance” app to support users with everyday decisions, for example with food, shopping, finance or time-management. When stressed and overloaded with information, we are easily influenced towards decisions we would reject if we had the right information and time to think. Although goals may be set, they are pushed aside by more short-term pressures. Poor decisions can lead to poor physical and mental health, as well as other negative outcomes.

The project is funded by an UnLtd small grant through the University of Manchester Social Enterprise innovation initiative:
http://umip.com/umip-awards-social-enterprise-innovation-create-buzz-manchester/.

Why is this new?

There are lots of apps out there (see for example http://www.businessinsider.com/functionality-is-key-to-the-future-of-health-apps-2014-6), but most are targeted towards individuals as “consumers”, and do not address the more challenging psychological and social problems. Usually they are just addressing a single issue in isolation, such as exercise. Although Google and Apple have started to develop “health platforms” (http://www.businessinsider.com/google-fit-apple-healthkit-2014-6) they are mostly concerned with integrated sensors and tracking of physical health, not with psychological health.

Some initial requirements

These are the broad requirements that I am starting with:

  1. Citizen led: the users should be in control of both the technology and the process by which they improve their decisions and actions.
  2. Promoting reflection and reasoning: reflection and reasoning are high-level cognitive processes that can put the user in control because they involve awareness, deliberate goal setting and planning. This contrasts with behavioural “nudging”, which affects decisions unconsciously and has been criticised as unethical (http://www.newscientist.com/article/mg21228376.500-nudge-policies-are-another-name-for-coercion.html#.U-ZXsvldVmU). For this reason I think the app should be called a “cognitive assistance” app.
  3. Promoting social support: Decisions are influenced by the wider social context. Therefore the app should not just be helping individuals in isolation, but must take account of their social environment and promote social support.

Some technical and research challenges

One way to help people resist pressures to make bad decisions is to raise awareness of significant and relevant information (such as why the original goals were set in the first place) and to encourage reasoning. In particular, this might include the following (a small sketch follows the list):

  • Smart prompts and reminders – sensitive to the person’s mood and circumstances. Users could also be prompted to send encouragement to others when they may need it.
  • Visualisations to draw attention to the important things, when a user is overloaded with information and options.
  • A kind of dialogue system to help with reasoning (not necessarily using text).
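
As a very rough sketch, a context-sensitive prompt might be chosen like this. The context fields, rules and thresholds are illustrative assumptions rather than a worked-out design.

```javascript
// Hypothetical context-sensitive prompt: a reminder is sent only when
// the user's inferred mood and current load make it welcome.
// The context fields and rules are illustrative assumptions.
function choosePrompt(context, goal) {
  if (context.stressLevel === "high" || context.notificationsLastHour > 3) {
    return null; // stay silent rather than add to the overload
  }
  if (context.location === "supermarket" && goal.topic === "food") {
    return `Quick reminder: you set this goal because "${goal.reason}".`;
  }
  return null;
}

const prompt = choosePrompt(
  { stressLevel: "low", notificationsLastHour: 1, location: "supermarket" },
  { topic: "food", reason: "I want more energy during the week" }
);
if (prompt) console.log(prompt);
```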

Getting these right is extremely challenging. In particular, the app must be sensitive to the user’s state of mind and changing context. Smart sensor technology can help with this (see for example the MindTech project at http://www.mindtech.org.uk/technologies-overview.html). The challenge that particularly interests me is modelling the user’s mental attitudes so that reminders are not disruptive or insensitive. To support reasoning, some principles of online Cognitive Behavioural Therapy (http://www.nice.org.uk/Guidance/ta97) may be applied. In future posts I will discuss the role of theories and models, as well as the participatory software design process.

Ownership of Health Data

I’ve been thinking about ideas for the upcoming HealthHack (nwhealthhack.com). In addition to participatory design (see last post), I’m also interested in transparency and accountability of eHealth infrastructure. Health apps and devices often record real-time data.  Examples include “ecological momentary interventions” that ask patients how they are feeling, and smart sensing devices that transmit data on activity or physiological states.

If I am using a device that produces real-time data, I would like an app that can provide the following information:
(a) What is happening to the data produced by the device? Where does it go, and where is it stored? Which service providers are involved? What are the estimated risks to integrity and privacy in each case?
(b) Which humans can see the data and why? What decisions can they make?
(c) How is the data processed? What algorithms are applied to the data and why? E.g. visualisation, decision support. In each case, what are the risks of error?

Some important points:
1. This is not only about data, but also about processes and organisations.
2. It’s not just about privacy, but also about integrity and reliability.
3. The client or patient need not understand the information in detail, but they may consult an independent expert who can understand it – just as with open source software.
4. Ideally we need modelling on multiple levels of abstraction (e.g. a component can be a secure wireless connection, or it can be an algorithm).

Although this requires some challenging modelling, I think we can start to make the first steps by tracking the data, showing where it is going, and what algorithms or organisations are using it. The next challenge would be ensuring that only acceptable things are happening. More on this later…
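
As a first step, each hop the data takes could be logged as a simple provenance record, which an accountability app could then filter and display. The record fields below are illustrative assumptions, not an existing standard.

```javascript
// Hypothetical provenance log for device data: each hop records where
// the data went, who can see it, and what was done to it. Field names
// are illustrative assumptions.
const provenance = [];

function recordHop(dataId, entry) {
  provenance.push({ dataId, timestamp: new Date().toISOString(), ...entry });
}

recordHop("mood-sample-42", {
  holder: "device vendor cloud",
  visibleTo: ["clinical team"],
  processing: "aggregation for weekly mood chart",
  risks: ["re-identification if combined with location data"],
});

// An accountability app could answer questions (a)-(c) above by
// filtering and displaying these records for a given data item.
console.log(provenance.filter((p) => p.dataId === "mood-sample-42"));
```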

Some explorations in JavaScript

To get familiar with JavaScript best practices, I started a small project to improve user control of a web experience. I often get annoyed at the way some websites assume a certain cognitive style and don’t let me change anything (apart from maybe the font size for the whole page). So I put some code on GitHub at: https://github.com/CatMKennedy/UserControl. It is currently very simple; it provides some hotkeys for changing the font size or colour of a section of text. For example, a user might want to mark some part of the text as important, or minimise/delete another section as irrelevant.
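
The idea can be sketched roughly as follows. This is a simplified browser-side illustration of the approach, not the actual repository code; the key bindings are my own assumptions.

```javascript
// Simplified illustration of per-section text controls via hotkeys,
// in the spirit of the UserControl project (not the actual repo code).
let selected = null;

// Clicking a paragraph selects it for adjustment.
document.querySelectorAll("p").forEach((p) => {
  p.addEventListener("click", () => { selected = p; });
});

document.addEventListener("keydown", (e) => {
  if (!selected) return;
  const size = parseFloat(getComputedStyle(selected).fontSize);
  if (e.key === "+") selected.style.fontSize = `${size + 2}px`; // enlarge
  if (e.key === "-") selected.style.fontSize = `${size - 2}px`; // shrink
  if (e.key === "i") selected.style.backgroundColor = "yellow"; // mark important
  if (e.key === "x") selected.style.opacity = "0.3";            // de-emphasise
});
```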

As a longer term aim, I support the idea of helping users to participate in the design and personalisation of their web or mobile experience. This is not just about the surface “look and feel”, but also about the underlying architecture: what kind of information is considered important, and how is it presented? An example might be a health advice application that adapts to the concepts and experiences of the patient. This is the idea behind “Health 2.0” which I have recently started to find out more about.