Metacognition: Part 1 – reasoning and learning

My research in cognitive systems is focused on metacognition (“thinking about thinking”). In this post, I will summarise some of its key features and briefly discuss some examples in the context of reasoning and learning, both for humans and AI systems.

In psychology, metacognition involves introspective monitoring of our reasoning and mental experiences, as well as the ability to control or adjust our thinking. Monitoring includes questions such as: “Have I made the right decision, or are there some other issues that I need to consider?” Control includes making decisions about what to focus our attention on, or what mental strategies to use for problem-solving. In education, metacognitive strategies include the learning of new concepts by connecting them to familiar concepts (e.g. using mind maps).

Metacognition also includes awareness of emotions and how they might affect learning and decisions. I will talk about this in part 2.

Application to AI systems
Some principles of metacognition can be applied to AI systems, such as robots or automated decision systems. The architecture of such systems is usually divided into two levels:
  • Object-level: solving the problem (e.g. route planning, medical diagnosis).
  • Meta-level: reasoning about the methods used to solve the problem (e.g. algorithms, representations).
The term “meta-reasoning” is often used for these systems. A key feature is transparent reasoning and explanation (see e.g. [Cox 2011]). “Reasoning” here covers a wide range of problem-solving techniques, which can happen on either the meta-level or the object-level. Metacognition happens on the meta-level and can be divided into two processes:
  • Meta-level monitoring: monitoring progress and recognising problems in object-level methods.
  • Meta-level control: making adjustments to object-level methods.
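
The two meta-level processes above can be sketched as a simple loop. This is a minimal illustration in Python, not taken from any particular system: the strategies and the "expected progress" threshold are invented for the example.

```python
# Minimal sketch of a meta-reasoning loop (illustrative only).
# The object level works on the task; the meta level monitors
# progress and switches strategy when progress stalls.

def object_level_step(state, strategy):
    """Apply one step of the current object-level strategy."""
    return strategy(state)

def meta_monitor(history, expected_progress):
    """Meta-level monitoring: flag a problem if progress has stalled."""
    if len(history) < 2:
        return False
    return (history[-2] - history[-1]) < expected_progress

def meta_control(strategies, current):
    """Meta-level control: switch to the next available strategy."""
    i = strategies.index(current)
    return strategies[(i + 1) % len(strategies)]

def solve(state, strategies, expected_progress=1, max_steps=20):
    strategy = strategies[0]
    history = [state]
    for _ in range(max_steps):
        if state == 0:               # object-level goal reached
            return state, strategy
        state = object_level_step(state, strategy)
        history.append(state)
        if meta_monitor(history, expected_progress):
            strategy = meta_control(strategies, strategy)
    return state, strategy

# Toy task: reduce a number to zero; "slow" stalls, "fast" works.
slow = lambda x: x            # makes no progress
fast = lambda x: max(0, x - 5)
result, used = solve(20, [slow, fast])
```

The point of the sketch is the separation of concerns: the object-level functions know nothing about strategy choice, while the meta-level functions never touch the task itself.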

Correcting mistakes in reasoning
Metacognition is often used in everyday situations when things go wrong. For example, if a hill walker is following a route and finds that a landmark has not appeared when expected, they may ask: “Did I make a navigation error?” or “Is my route correct, but am I overestimating how fast I am going?”. These questions are metacognitive because they attempt to diagnose mistakes in reasoning, such as navigation errors or progress overestimation. In contrast, the hill walker’s non-metacognitive reasoning solves problems in the outside world, such as determining the current location and planning a route to the destination.

In a similar way, a robot might detect problems in its automated planning or navigation. For example, it could use an algorithm to predict the duration of a route, or it might have learned to recognise typical landmarks. If the route is unusual, unexpected events can occur, such as a landmark failing to appear. Recognising such an unexpected event is part of meta-level monitoring. The robot could respond by recalculating its position and re-planning its route, or it could simply ask for assistance. This would involve a minimal level of meta-level control (e.g. stopping current algorithms and initiating new ones). A more complex form of meta-level control would involve the robot deciding what can be learned from the failure. It could identify specific features of the route that differed from the type of route it had learned about, and use this information to generate a new learning goal, along with a learning plan. The concept of generating learning goals in AI has been around for some time (see, for example, [Cox and Ram 1999] and [Radhakrishnan et al. 2009]).
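
The monitoring part of this example can be sketched as expectation checking. The landmark names and timings below are hypothetical, and a real robot would use probabilistic localisation rather than a simple timetable; the sketch only shows the shape of the monitor/control split.

```python
# Hypothetical sketch of meta-level monitoring on a robot's route:
# if an expected landmark has not appeared by its predicted time,
# flag the expectation failure and choose a recovery action.

def monitor_landmarks(expected, observed, elapsed):
    """Return the landmarks that are overdue (expectation failures)."""
    return [name for name, due in expected.items()
            if elapsed > due and name not in observed]

def choose_recovery(failures, can_relocalise):
    """Meta-level control: pick a corrective action."""
    if not failures:
        return "continue"
    return "replan_route" if can_relocalise else "ask_for_assistance"

expected = {"bridge": 10, "cairn": 25}   # predicted arrival times (minutes)
observed = {"bridge"}
failures = monitor_landmarks(expected, observed, elapsed=30)
action = choose_recovery(failures, can_relocalise=True)
```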

If the robot can reason about its mistakes, explain them and take autonomous corrective action (even if it is just deciding that it needs help), it may be considered to be metacognitive.

Metacognition is not just about mistakes in reasoning. It may also be about self-regulation. In Part 2, I will talk about this kind of metacognition.

  • [Cox 2011] Cox, M. T. Metareasoning, Monitoring and Self-Explanation. In: Cox, M. T. and Raja, A. (eds.) Metareasoning: Thinking about Thinking, pp 3–14, MIT Press (2011).
  • [Cox and Ram 1999] Cox, M. T. and Ram, A. Introspective multistrategy learning: On the construction of learning strategies. Artificial Intelligence, 112(1–2), pp 1–55 (1999).
  • [Radhakrishnan et al. 2009] Radhakrishnan, J., Ontanon, S. and Ram, A. Goal-Driven Learning in the GILA Integrated Intelligence Architecture. International Joint Conference on Artificial Intelligence (IJCAI 2009).

Integrity in collaborative IT systems: Part 2 – the need for rich test environments

In Part 1, I argued that dependability as a concept might be applied to organisations as well as to technical systems. In this post I will argue that both the organisational and technical levels should be modelled together as an interconnected system, and that test environments for dependability should include the simulation of organisational problems as well as technical problems.

Socio-technical stack
Higher-level organisational requirements cannot be considered in isolation from the underlying IT requirements. Organisational and IT system problems can interact in complex ways, and such problems are common in real-world organisations. Therefore, these different levels need to be considered together. Such a multi-level system can be viewed as a socio-technical stack [Baxter & Sommerville 2011].

The different levels of requirements can be listed as follows:

  1. Specific organisational functionality requirements (e.g. medical workflows)
  2. Organisational dependability requirements (e.g. avoiding error)
  3. Specific IT requirements for the organisation (resources, networks etc.)
  4. IT dependability requirements (availability, security etc.)

Dependability requirements (2 and 4) may be more generic than requirements 1 and 3. For example, all organisations will want to reduce error, but they may have different measures of what is acceptable. Requirements 3 and 4 can usually be satisfied by off-the-shelf components (though these would need to be configured).

We assume that the software satisfying the first set of requirements (1) serves multiple types of user, each with different services. Such software is often called “enterprise application software”. In a health care system, users can be patients, clinicians or administrators. They access their own services in the system and have specific actions available to them at particular stages in their workflow. For example, a patient could review their details or access records following a consultation. A clinician could request a test or respond to a symptom update from a patient.
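
This role-and-stage structure can be pictured as a small lookup table of available actions. The roles, stages and action names below are hypothetical, chosen to match the examples above rather than any real system.

```python
# Illustrative sketch: each user role sees only the actions
# available to it at its current workflow stage.

ACTIONS = {
    ("patient", "post_consultation"): ["review_details", "access_records"],
    ("patient", "awaiting_results"):  ["send_symptom_update"],
    ("clinician", "consultation"):    ["request_test", "record_notes"],
    ("clinician", "follow_up"):       ["respond_to_symptom_update"],
}

def available_actions(role, stage):
    """Return the actions this role may take at this workflow stage."""
    return ACTIONS.get((role, stage), [])
```

A real enterprise application would derive such a table from its workflow and access-control models; the point here is only that "user, role and stage" jointly determine what a user can do.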

Need for a test environment with simulation
To improve organisational resilience and dependability, it is important to develop new methods for detection and correction of organisational problems. To test these problem detection and recovery methods, it is useful to run simulated scenarios where human mistakes and IT failures can occur together. “Simulations” might involve people participating (as in a kind of role-playing game) or simulated computational agents [Macal 2016].

Examples of failure that might be simulated:

  • human mistakes (e.g. choosing the wrong test)
  • administration failure: a patient receives no response to a request (which should have a time limit)
  • software failure (e.g. data interoperability issues)
  • malware
  • hardware failure
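
A failure-injection harness for such simulations might look like the following sketch. The event and failure names are invented for illustration, and a real agent-based simulation would be far richer; the sketch only shows the idea of injecting failures into an otherwise normal scenario and checking that the detection software spots them.

```python
# Sketch of failure injection in a simulated scenario. Agent
# behaviour is abstracted here as a simple log of events.
import random

FAILURE_TYPES = ["wrong_test_chosen", "request_timed_out",
                 "data_interop_error", "hardware_failure"]

def run_scenario(events, inject, rng):
    """Copy the normal event log, randomly injecting one failure."""
    log = list(events)
    if inject:
        log.insert(rng.randrange(len(log) + 1), rng.choice(FAILURE_TYPES))
    return log

def detect_failures(log):
    """Problem-detection software under test: spot known failures."""
    return [e for e in log if e in FAILURE_TYPES]

rng = random.Random(0)   # seeded for repeatable test runs
normal = ["request_test", "perform_test", "report_result"]
log = run_scenario(normal, inject=True, rng=rng)
problems = detect_failures(log)
```

Running the detector over many injected scenarios (and over failure-free ones, to check for false alarms) gives a simple measure of how well each version of the detection software performs.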

A test environment needs to be coupled with the iterative development of the system being tested. This would involve developing increasingly complex problem-detection software in parallel with increasingly challenging scenarios. For example, the first version might involve simple errors that are easy to detect. Subsequent stages might involve more detailed work scenarios with more complex errors or failures. The more advanced stages might also involve real users in different roles (e.g. nursing students, medical students) and include time pressure.

Importance of agile and participatory design
In addition to developing safe systems, changing them safely is also important, so the development and test methodology needs to include change management. Agile software engineering is particularly important here, along with participatory design (co-design) methods. Ideally the system would be co-designed iteratively by the different users as they become aware of error-prone situations (such as cognitive overload) while participating in the evaluations. Design needs to be informed by cognitive science as well as computer science.

In later posts, I plan to talk about the role of AI and decision support in organisational dependability.