
Metacognition: Part 1 – reasoning and learning

My research in cognitive systems is focused on metacognition (“thinking about thinking”). In this post, I will summarise some of its key features and briefly discuss some examples in the context of reasoning and learning, both for humans and AI systems.

In psychology, metacognition involves introspective monitoring of our reasoning and mental experiences, as well as the ability to control or adjust our thinking. Monitoring includes questions such as: “Have I made the right decision, or are there some other issues that I need to consider?” Control includes making decisions about what to focus our attention on, or what mental strategies to use for problem-solving. In education, metacognitive strategies include the learning of new concepts by connecting them to familiar concepts (e.g. using mind maps).

Metacognition also includes awareness of emotions and how they might affect learning and decisions. I will talk about this in part 2.

Application to AI systems
Some principles of metacognition can be applied to AI systems, such as robots or automated decision systems. The architecture of such systems is usually divided into two levels:
  • Object-level: solving the problem (e.g. route planning, medical diagnosis).
  • Meta-level: reasoning about the methods used to solve the problem (e.g. algorithms, representations).
The term “meta-reasoning” is often used for these systems. A key feature is transparent reasoning and explanation (see e.g. [Cox 2011]). “Reasoning” here covers a wide range of problem-solving techniques, which can happen on either the meta-level or the object-level. Metacognition happens on the meta-level and can be divided into two processes (a minimal code sketch follows the list below):
  • Meta-level monitoring: monitor progress and recognise problems in object-level methods.
  • Meta-level control: make adjustments to object-level methods.
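
As a concrete illustration of this two-level structure, here is a minimal Python sketch. It is my own illustration rather than an established framework, and the names (RoutePlanner, MetaLevel, monitor, control) are assumptions: the object level estimates route durations, while the meta-level monitors how accurate those estimates are and adjusts the object-level method when they drift.

class RoutePlanner:
    """Object-level: solves the problem (here, toy route-duration estimation)."""
    def __init__(self, expected_speed_kmh=4.0):
        self.expected_speed_kmh = expected_speed_kmh

    def estimate_duration_h(self, distance_km):
        return distance_km / self.expected_speed_kmh


class MetaLevel:
    """Meta-level: reasons about the object-level method, not about the route."""
    def __init__(self, planner, tolerance=0.25):
        self.planner = planner
        self.tolerance = tolerance  # how much prediction error is acceptable

    def monitor(self, distance_km, actual_duration_h):
        """Meta-level monitoring: compare the prediction with observed progress."""
        predicted = self.planner.estimate_duration_h(distance_km)
        error = abs(actual_duration_h - predicted) / predicted
        return error > self.tolerance  # True means "something unexpected happened"

    def control(self, distance_km, actual_duration_h):
        """Meta-level control: adjust the object-level method if monitoring flags a problem."""
        if self.monitor(distance_km, actual_duration_h):
            # e.g. revise the speed assumption used by the object level
            self.planner.expected_speed_kmh = distance_km / actual_duration_h


meta = MetaLevel(RoutePlanner())
meta.control(distance_km=6.0, actual_duration_h=2.0)  # much slower than predicted
print(meta.planner.expected_speed_kmh)                # speed assumption revised to 3.0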

Correcting mistakes in reasoning
Metacognition is often used in everyday situations when things go wrong. For example, if a hillwalker is following a route and finds that a landmark has not appeared when expected, they may ask: “Did I make a navigation error?” or “Is my route correct, but am I overestimating how fast I am going?”. These questions are metacognitive because they attempt to diagnose mistakes in reasoning, such as navigation errors or overestimation of progress. In contrast, the hillwalker’s non-metacognitive reasoning solves problems in the outside world, such as determining the current location and planning a route to the destination.

In a similar way, a robot might detect problems in its automated planning or navigation. For example, it could use an algorithm to predict the duration of a route, or it might have learned to recognise typical landmarks. If the route is unusual, unexpected events can occur, such as a landmark failing to appear. Recognising such unexpected events is part of meta-level monitoring. The robot could respond by recalculating its position and re-planning its route, or it could simply ask for assistance; this involves a minimal level of meta-level control (e.g. stopping current algorithms and initiating new ones). A more complex form of meta-level control would involve the robot deciding “what can be learned” from the failure. It could identify specific features of the route that differ from the type of route it has learned about, and use this information to generate a new learning goal, along with a learning plan. The concept of generating learning goals in AI has been around for some time (see, for example, [Cox and Ram 1999] and [Radhakrishnan et al. 2009]).
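
The sketch below shows one way this might look in code. It is only an assumption about how such a robot could be structured (the landmark names, decision rules and the LearningGoal fields are made up for illustration): the meta-level monitor reports expectation failures, and the meta-level controller chooses between re-planning, asking for assistance, or generating a learning goal with a simple learning plan.

from dataclasses import dataclass

@dataclass
class LearningGoal:
    description: str   # what should be learned
    trigger: str       # the failure that motivated the goal
    plan: list         # steps of a simple learning plan

def meta_monitor(expected_landmarks, observed_landmarks):
    """Meta-level monitoring: report which landmark expectations failed."""
    return [lm for lm in expected_landmarks if lm not in observed_landmarks]

def meta_control(missing, route_is_unusual, can_relocalise):
    """Meta-level control: decide how to respond to an expectation failure."""
    if not missing:
        return "continue"
    if can_relocalise:
        return "recalculate position and re-plan the route"
    if route_is_unusual:
        # Decide "what can be learned": generate a learning goal and plan.
        return LearningGoal(
            description="recognise landmarks on routes of this type",
            trigger=f"missing landmarks: {missing}",
            plan=["collect examples from similar routes", "retrain the landmark recogniser"],
        )
    return "ask for assistance"

missing = meta_monitor(["bridge", "tower"], ["bridge"])
print(meta_control(missing, route_is_unusual=True, can_relocalise=False))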

If the robot can reason about its mistakes, explain them and take autonomous corrective action (even if it is just deciding that it needs help), it may be considered to be metacognitive.

Metacognition is not just about mistakes in reasoning. It may also be about self-regulation. In Part 2, I will talk about this kind of metacognition.

References
  • [Cox 2011] Cox, M. T. Metareasoning, Monitoring and Self-Explanation. In: Cox, M. T. and Raja, A. (eds.) Metareasoning: Thinking about Thinking, pp 3–14, MIT Press (2011).
  • [Cox and Ram 1999] Cox, M. T. and Ram, A. Introspective Multistrategy Learning: On the Construction of Learning Strategies. Artificial Intelligence, 112(1–2), pp 1–55 (1999).
  • [Radhakrishnan et al. 2009] Radhakrishnan, J., Ontanon, S. and Ram, A. Goal-Driven Learning in the GILA Integrated Intelligence Architecture. International Joint Conference on Artificial Intelligence (IJCAI 2009).

Integrity in collaborative IT systems: Part 1 – the concept of dependability

Recently I’ve been looking at collaborative decision-making in mental health, with the aim of identifying the technology requirements to support shared decision-making (details of this project are here). One conclusion is that the underlying IT infrastructure needs to be considered, and in particular its reliability.

In general, a collaborative IT system can be understood as a distributed system with a particular purpose, where users with different roles collaborate to achieve a common goal. Examples include university research collaboration, public transport and e-government. In the example of health IT, a medical practice might have an IT system where a patient makes an appointment, medical records are inspected and updated, treatment decisions are made and recorded, and the patient may be referred to a specialist.

IT resilience and dependability
The resilience of an IT system is its capability to satisfy service requirements if some of its components fail or are changed. If parts of the system fail due to faults, design errors or cyber-attack, a resilient system continues to deliver the required services. Similarly, if a software update is made, the system services should not be adversely affected. Resilience is an important aspect of dependability, which is defined precisely in terms of availability, reliability, safety, security and maintainability [Avizienis et al. 2004]. Importantly, dependability is not just about resilience, but also about trust and integrity.

IT dependability is usually understood on a technical level (the network or the software) and does not usually consider the design of the organisation (for example, errors that occur due to lack of training).

Organisational resilience and dependability
Just as an IT system can be resilient on a technical level, an organisation (such as a health provider) can also be resilient and dependable in meeting high-level organisational requirements. Organisational requirements are defined in terms of the organisation itself and are independent of the IT; for example, they may be defined in terms of business processes or workflows. I think the idea of dependability requirements for an organisation is also useful, and these may be specified separately. In healthcare, they might include the following:

  • implementation – ensure that agreed decisions are actually carried out
  • avoidance of error – e.g. avoid excessive workloads
  • timeliness – e.g. for cancer diagnosis
  • transparency – e.g. is there an audit trail of critical decisions and actions?
  • accountability – e.g. is it possible to challenge decisions?

Technology can help to ensure that these dependability requirements are satisfied. For example, excessive workload may be detectable by automated monitoring (e.g. one person being assigned too many tasks), in much the same way that technical faults or security violations are detected.
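
As a rough sketch of what such monitoring might look like (the task data, threshold and function name are assumptions, not part of any specific health IT system), the following Python snippet flags possible overload by counting the open tasks assigned to each person.

from collections import Counter

def detect_overload(task_assignments, max_tasks=10):
    """Return the people whose number of assigned tasks exceeds max_tasks."""
    counts = Counter(task_assignments.values())
    return {person: n for person, n in counts.items() if n > max_tasks}

# task id -> assigned person (illustrative data only)
assignments = {f"task-{i}": "dr_smith" for i in range(12)}
assignments["task-99"] = "dr_jones"
print(detect_overload(assignments))   # {'dr_smith': 12}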

In Part 2, I will discuss the need for a test and simulation environment.

References
  • [Avizienis et al. 2004] Avizienis, A., Laprie, J.-C., Randell, B. and Landwehr, C. Basic Concepts and Taxonomy of Dependable and Secure Computing. IEEE Transactions on Dependable and Secure Computing, 1(1), pp 11–33 (2004).