
Smart Assistance for Lifestyle Decisions

Smart assistance technology helps users achieve goals through informative messages such as recommendations or reminders. To be “smart”, these messages need to be sensitive to the user’s changing context, including their current knowledge and perceptions, their mood and their environment.
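
As a very simple illustration, the sketch below (in Python; the class and field names are my own invention, not part of any existing app) shows the kind of context information such messages would need to be sensitive to:

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative sketch only: the fields mirror the kinds of context
    # mentioned above (knowledge, perceptions, mood, environment).
    @dataclass
    class UserContext:
        knowledge: List[str] = field(default_factory=list)    # what the user already knows
        perceptions: List[str] = field(default_factory=list)  # how they currently see their situation
        mood: str = "neutral"                                  # e.g. "calm", "stressed"
        environment: str = "unknown"                           # e.g. "at home", "shopping"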

Right now, I’m starting to explore the feasibility of a “smart assistance” app to support users with everyday decisions, for example about food, shopping, finance or time-management. When stressed and overloaded with information, we are easily influenced towards decisions we would disagree with if we had the right information and time to think. Even when goals have been set, they are pushed aside by other, more short-term pressures. Poor decisions lead to poor physical and mental health, as well as other negative outcomes.

The project is funded by an UnLtd small grant through the University of Manchester Social Enterprise innovation initiative:
http://umip.com/umip-awards-social-enterprise-innovation-create-buzz-manchester/.

Why is this new?

There are lots of apps out there (see for example http://www.businessinsider.com/functionality-is-key-to-the-future-of-health-apps-2014-6), but most target individuals as “consumers” and do not address the more challenging psychological and social problems. They usually address a single issue in isolation, such as exercise. Although Google and Apple have started to develop “health platforms” (http://www.businessinsider.com/google-fit-apple-healthkit-2014-6), these are mostly concerned with integrating sensors and tracking physical health, not with psychological health.

Some initial requirements

These are the broad requirements that I am starting with:

  1. Citizen led: the users should be in control of both the technology and the process by which they improve their decisions and actions.
  2. Promoting reflection and reasoning: reflection and reasoning are high-level cognitive processes that can put the user in control because they involve awareness, deliberate goal setting and planning. This contrasts with behavioural “nudging”, which affects decisions unconsciously and has been criticised as unethical (http://www.newscientist.com/article/mg21228376.500-nudge-policies-are-another-name-for-coercion.html#.U-ZXsvldVmU). For this reason I think the app should be called a “cognitive assistance” app.
  3. Promoting social support: decisions are influenced by the wider social context. The app should therefore not just help individuals in isolation, but must take account of their social environment and promote social support.

Some technical and research challenges

One way to help people resist pressures to make bad decisions is to raise awareness of significant and relevant information (such as why the original goals were set in the first place) and to encourage reasoning. In particular, this might include the following:

  • Smart prompts and reminders, sensitive to the person’s mood and circumstances. Users could also be prompted to send encouragement to others when those others may need it. (A rough sketch of how such a prompt might be scheduled follows this list.)
  • Visualisations that draw attention to the important things when a user is overloaded with information and options.
  • A kind of dialogue system to help with reasoning (not necessarily using text).
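
As promised above, here is a rough sketch of how such a mood-sensitive prompt might be scheduled. Everything here is hypothetical: the mood labels, the notion of being “overloaded” and the two-hour minimum gap are placeholder assumptions for illustration, not results from the project.

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional

    # Illustrative only: a real app would estimate mood and overload
    # from sensors or self-report rather than fixed labels.
    @dataclass
    class PromptContext:
        mood: str                        # e.g. "calm", "stressed"
        last_prompt: Optional[datetime]  # when the user was last prompted
        overloaded: bool                 # e.g. many notifications recently

    def should_send_prompt(ctx: PromptContext, now: datetime,
                           min_gap: timedelta = timedelta(hours=2)) -> bool:
        """Withhold prompts when the user is stressed, overloaded,
        or was prompted too recently."""
        if ctx.mood == "stressed" or ctx.overloaded:
            return False
        if ctx.last_prompt is not None and now - ctx.last_prompt < min_gap:
            return False
        return True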

Getting these right is extremely challenging. In particular, the app must be sensitive to the user’s state of mind and changing context. Smart sensor technology can help with this (see, for example, the MindTech project at http://www.mindtech.org.uk/technologies-overview.html). The challenge that particularly interests me is modelling the user’s mental attitudes so that reminders are not disruptive or insensitive. To support reasoning, some principles of online Cognitive Behavioural Therapy (http://www.nice.org.uk/Guidance/ta97) may be applied. In future posts I will discuss the role of theories and models, as well as the participatory software design process.
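
As a very rough illustration of what a dialogue to support reasoning might look like in a first prototype (it is not taken from any particular CBT programme, and a real version need not use text at all), the questions and flow below are invented for the sake of example:

    # Illustrative only: a fixed sequence of reflective questions, loosely
    # inspired by the structure of guided self-help; not a clinical tool.
    REFLECTION_STEPS = [
        "What decision are you facing right now?",
        "What was your original goal, and why did you set it?",
        "What pressures are pushing you away from that goal?",
        "What would you advise a friend in the same situation?",
        "What will you do next?",
    ]

    def run_reflection_dialogue() -> dict:
        """Ask each question in turn and collect the answers."""
        answers = {}
        for question in REFLECTION_STEPS:
            answers[question] = input(question + " ")
        return answers

    if __name__ == "__main__":
        print(run_reflection_dialogue())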

Ownership of Health Data

I’ve been thinking about ideas for the upcoming HealthHack (nwhealthhack.com). In addition to participatory design (see the last post), I’m also interested in the transparency and accountability of eHealth infrastructure. Health apps and devices often record real-time data. Examples include “ecological momentary interventions”, which ask patients how they are feeling, and smart sensing devices that transmit data on activity or physiological states.

If I am using a device that produces real-time data, I would like an app that can answer the following questions (a rough data model is sketched after the list):
(a) What is happening to the data produced by the device? Where does it go, and where is it stored? Which service providers are involved? What are the estimated risks to integrity and privacy in each case?
(b) Which humans can see the data and why? What decisions can they make?
(c) How is the data processed? What algorithms are applied to the data and why? E.g. visualisation, decision support. In each case, what are the risks of error?
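
One possible first step towards answering (a)–(c) is to attach a machine-readable provenance record to each data stream. The sketch below is only an illustration; the record structure and field names are my own invention and do not follow any existing standard:

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical provenance record for one health data stream.
    @dataclass
    class ProcessingStep:
        algorithm: str          # e.g. "weekly activity visualisation"
        purpose: str            # why it is applied
        error_risks: List[str]  # known sources of error

    @dataclass
    class DataFlowRecord:
        source_device: str                    # where the data is produced
        storage_locations: List[str]          # where it is stored
        service_providers: List[str]          # organisations involved
        human_viewers: List[str]              # who can see it, and in what role
        processing: List[ProcessingStep] = field(default_factory=list)
        privacy_risks: List[str] = field(default_factory=list)
        integrity_risks: List[str] = field(default_factory=list)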

Some important points:
1. This is not only about data, but also about processes and organisations.
2. It’s not just about privacy, but also about integrity and reliability.
3. The client or patient need not understand the information in detail, but they may consult an independent expert who can understand it – just as with open source software.
4. Ideally we need modelling on multiple levels of abstraction (e.g. a component can be a secure wireless connection, or it can be an algorithm).

Although this requires some challenging modelling, I think we can take the first steps by tracking the data, showing where it goes, and showing which algorithms and organisations are using it. The next challenge would be ensuring that only acceptable things are happening. More on this later…
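
To make that first step concrete, here is a toy sketch of what a trace of the data’s journey might look like, including the level of abstraction at which each component is described (point 4 above). The components and organisations listed are invented examples:

    from dataclasses import dataclass
    from typing import List

    # Toy example of tracking a data item through named components; the
    # components and organisations are invented for illustration.
    @dataclass
    class HopEvent:
        component: str      # e.g. "Bluetooth link", "visualisation algorithm"
        organisation: str   # who operates the component
        level: str          # level of abstraction: "transport", "storage", "processing"

    def describe_trace(trace: List[HopEvent]) -> str:
        """Render the journey of a data item as a readable summary."""
        lines = [f"- {hop.component} ({hop.level}), operated by {hop.organisation}"
                 for hop in trace]
        return "Where your data went:\n" + "\n".join(lines)

    example_trace = [
        HopEvent("wrist sensor", "device manufacturer", "transport"),
        HopEvent("cloud storage bucket", "cloud provider", "storage"),
        HopEvent("activity visualisation", "app developer", "processing"),
    ]

    print(describe_trace(example_trace))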