
Metacognition: Part 2 – self-regulation

The previous post on metacognition (part 1) compared metacognition in humans with AI agents. Key concepts introduced were meta-level monitoring and control, with the main focus on detecting mistakes in reasoning and gaps in knowledge. This post (part 2) will argue that metacognition is also important for ensuring that requirements are met in the presence of conflicting pressures. For humans, this is often called “self-regulation”.

Two kinds of thinking
The role of self-regulation is best understood within the larger context of decision-making processes. Human cognition is often described in terms of two kinds of thinking, called “system 1” and “system 2” (Kahneman [1]). System 1 responds quickly to events, but can be biased. System 2 is slower and more effortful, but is good at reasoning. An important property of system 2 is that it can generate hypothetical “what-if” scenarios. In contrast, system 1 only sees information that is immediately available.

Emotions and affective states are closely associated with system 1. This is particularly true of the effects of emotion on cognition. However, system 2 may be involved in generating emotions (such as fear caused by reasoning about hypothetical states). Metacognition is usually associated with system 2. The two kinds of thinking are intended as useful concepts only, and do not correspond to parts of the brain.

Computational models
In computational models of human cognition, metacognition is often represented as an additional level of processing which monitors and controls other components in the architecture, such as perception, learning, reasoning, and planning. An example is H-CogAff (described earlier in https://catmkennedy.com/2020/01/09/what-is-a-cognitive-architecture/). In H-CogAff, the reactive layer is similar to system 1 while the deliberative layer approximates system 2. The metacognition layer monitors and adjusts deliberative and reactive processing. (In H-CogAff, the reactive layer represents an older part of the brain than the deliberative layer, meaning that these layers do not correspond exactly to “system 1” and “system 2”, but their similarities are still important).


In the same way as for cognitive models, applied AI agents can have a hybrid architecture with reactive and deliberative layers. Deliberation enables the agent to plan ahead, while reactivity ensures that it can respond quickly to unexpected events. In this case, the purpose of a hybrid architecture is not to simulate human or animal cognition, but to add useful design features to a real-world system. Metacognition (a meta-level) can be added to monitor and control the reactive and deliberative layers (both of which are “object-level”).
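
To make this layered structure concrete, here is a minimal sketch in Python of a hybrid agent whose meta-level monitors and controls the reactive and deliberative object-level layers. The class and method names are invented for illustration and do not come from any particular agent framework.

# A minimal sketch of a hybrid agent architecture with a meta-level.
# Class and method names are illustrative, not from any specific framework.

class ReactiveLayer:
    """Fast responses to incoming events (roughly 'system 1')."""
    def react(self, event):
        if event.get("urgent"):
            return f"immediate response to {event['name']}"
        return None  # non-urgent events are not handled at this layer


class DeliberativeLayer:
    """Slower, goal-directed planning (roughly 'system 2')."""
    def plan(self, goal):
        return [f"step towards {goal}"]  # placeholder plan


class MetaLevel:
    """Monitors the object-level layers and can adjust their behaviour."""
    def __init__(self, reactive, deliberative):
        self.reactive = reactive
        self.deliberative = deliberative
        self.reaction_log = []

    def monitor(self, event, reaction):
        # Meta-level monitoring: record what the object level did with each event.
        self.reaction_log.append((event["name"], reaction is not None))

    def control(self):
        # Meta-level control: adjust the object level if a problem is detected.
        # Left as a stub here; the "distraction" sketch below shows one example.
        pass


class HybridAgent:
    def __init__(self):
        self.reactive = ReactiveLayer()
        self.deliberative = DeliberativeLayer()
        self.meta = MetaLevel(self.reactive, self.deliberative)

    def handle(self, event):
        reaction = self.reactive.react(event)      # fast path
        self.meta.monitor(event, reaction)         # meta-level monitoring
        self.meta.control()                        # meta-level control
        # Fall back to deliberation when no immediate reaction was needed.
        return reaction or self.deliberative.plan(event.get("goal", "default goal"))


# Example use:
agent = HybridAgent()
agent.handle({"name": "obstacle detected", "urgent": True})
agent.handle({"name": "status update", "urgent": False, "goal": "reach waypoint"})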

Human self-regulation
For humans, system 1 reacts quickly, but not always in a way that is consistent with our goals or values. So we take corrective action (in psychology this is called “self-regulation”). Examples include:
  • Resisting distractions
  • Healthy eating (e.g. resisting cake)
  • Emotion regulation
The first two are about resisting pressures. Emotion regulation is more complex and includes strategies for re-interpreting the meaning of situations that cause emotions as well as strategies for modifying the emotional response itself. (See for example [2], which reviews emotion regulation theories). Some of my research involves computational modelling of emotion regulation [3].

Agent self-regulation
An AI agent can also have a self-regulation capability. For example, if the environment is unpredictable, the agent may need to react quickly to potentially dangerous events. But if it spends too much time reacting to minor events, this can cause a “distraction” problem and prevent a goal from being satisfied within the required time. To solve this problem, the agent must first detect the “distraction” (meta-level monitoring) and then adjust its sensitivity to interruptions (meta-level control). It might reconfigure its priorities so that minor events can be ignored. Meta-level control may also generate learning goals, such as identifying which events waste the most time or which kinds of “system 1” reactions should be suppressed.
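
As a rough illustration of this monitoring-and-control loop, the sketch below tracks how much time is spent reacting to minor events and raises the agent’s interruption threshold when a “distraction” pattern is detected. The thresholds, severity scale and field names are invented for the example.

# Illustrative sketch of meta-level monitoring and control for the
# "distraction" problem. The thresholds, severity scale and field names
# are invented for this example.

class DistractionMonitor:
    def __init__(self, time_budget=0.2):
        # Fraction of total time the agent is allowed to spend reacting
        # to minor (low-severity) events before this counts as distraction.
        self.time_budget = time_budget
        self.minor_time = 0.0
        self.total_time = 0.0

    def record(self, severity, duration):
        self.total_time += duration
        if severity < 0.3:                 # treat low-severity events as "minor"
            self.minor_time += duration

    def distracted(self):
        # Meta-level monitoring: too much time is going on minor events.
        return self.total_time > 0 and (self.minor_time / self.total_time) > self.time_budget


class InterruptionPolicy:
    def __init__(self):
        self.severity_threshold = 0.1      # events below this do not interrupt deliberation

    def adjust(self, monitor):
        # Meta-level control: raise the threshold so minor events no longer
        # interrupt deliberation. A further step could be to generate a learning
        # goal, e.g. to identify which event types waste the most time.
        if monitor.distracted():
            self.severity_threshold = min(0.5, self.severity_threshold + 0.1)


# Example use:
monitor = DistractionMonitor()
policy = InterruptionPolicy()
monitor.record(severity=0.1, duration=5.0)   # a minor event took 5 time units
monitor.record(severity=0.8, duration=1.0)   # an important event took 1 time unit
policy.adjust(monitor)                       # threshold rises: minor events now ignored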

Other self-regulation scenarios exist. For example, an AI agent that makes decisions in safety-critical situations could monitor the integrity of critical software that it relies on (e.g. software providing sensor data) and reconfigure or replace faulty components as necessary.

Some “self-adaptive” software architectures [4] have the foundations of self-regulation and could be described as performing “meta-reasoning” if they include explicit reasoning and explanation about the problems they have detected and the corrections they are making.
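
Combining the two ideas above, a self-adaptive integrity monitor might look roughly like the following sketch: it checks the health of the components it relies on, swaps in a backup when a check fails, and keeps an explicit record explaining each correction. The component interface and names are hypothetical.

# Sketch of a self-adaptive integrity monitor: it detects a faulty component,
# swaps in a backup, and records an explanation of each correction it makes.
# The component interface and names are hypothetical.

class Component:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def health_check(self):
        return self.healthy


class IntegrityMonitor:
    def __init__(self, components, backups):
        self.components = components       # e.g. {"sensor_feed": Component(...)}
        self.backups = backups             # replacement components, by role
        self.explanations = []             # explicit record of corrections made

    def run_cycle(self):
        for role, component in list(self.components.items()):
            if not component.health_check():           # monitoring
                backup = self.backups.get(role)        # deciding on a correction
                if backup is not None:
                    self.components[role] = backup     # applying the correction
                    self.explanations.append(
                        f"Replaced '{component.name}' in role '{role}' because its "
                        f"health check failed; switched to backup '{backup.name}'."
                    )


# Example use:
monitor = IntegrityMonitor(
    components={"sensor_feed": Component("sensor_feed_v1", healthy=False)},
    backups={"sensor_feed": Component("sensor_feed_v2")},
)
monitor.run_cycle()
print(monitor.explanations[0])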

In later blog posts, I plan to discuss the role of metacognition in ethical reasoning.

References
  1. Kahneman, D. Thinking, Fast and Slow. Farrar, Straus and Giroux (2011).
  2. Kobylińska, D. and Kusev, P. Flexible Emotion Regulation: How Situational Demands and Individual Differences Influence the Effectiveness of Regulatory Strategies. Frontiers in Psychology, Volume 10, Article 72 (2019). https://www.frontiersin.org/articles/10.3389/fpsyg.2019.00072/full
  3. Kennedy, C. M. Computational Modelling of Metacognition in Emotion Regulation. In: 8th Workshop on Emotion and Computing at the German Conference on AI (KI-2018), Berlin, Germany (2018). https://www.cs.bham.ac.uk/~cmk/emotion-regulation-metacognition.pdf
  4. Macías-Escrivá, F., Haber, R., del Toro, R. and Hernandez, V. Self-adaptive systems: A survey of current approaches, research challenges and applications. Expert Systems with Applications, Volume 40, Issue 18 (2013). https://www.sciencedirect.com/science/article/pii/S0957417413005125

Integrity in collaborative IT systems: Part 2 – the need for rich test environments

In Part 1, I argued that dependability as a concept might be applied to organisations as well as to technical systems. In this post I will argue that both the organisational and technical levels should be modelled together as an interconnected system, and that test environments for dependability should include the simulation of organisational problems as well as technical problems.

Socio-technical stack
Higher-level organisational requirements cannot be considered in isolation from the underlying IT requirements. Organisational and IT system problems can interact in complex ways, and such interacting problems are common in real-world organisations. Therefore, these different levels need to be considered together. Such a multi-level system can be viewed as a socio-technical stack [Baxter & Sommerville 2011].

The different levels of requirements can be listed as follows:

  1. Specific organisational functionality requirements (e.g. medical workflows)
  2. Organisational dependability requirements (e.g. avoiding error)
  3. Specific IT requirements for the organisation (resources, networks etc.)
  4. IT dependability requirements (availability, security etc.)

Dependability requirements (2 and 4) may be more generic than requirements 1 and 3. For example, all organisations will want to reduce error, but they may have different measures of what is acceptable. Requirements 3 and 4 can usually be satisfied by off-the-shelf components (although these would need to be configured).

We assume that the software satisfying the first set of requirements (1) serves multiple users with different services. Such software is often called “enterprise application software”. In a health care system, users can be patients, clinicians or administrators. Each accesses their own services in the system and has specific actions available to them at particular stages in their workflow. For example, a patient could review their details or access records following a consultation. A clinician could request a test or respond to a symptom update from a patient.
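
The idea that each role has specific actions available at particular stages of a workflow can be captured in a simple data model, sketched below in Python. The roles, stages and action names are examples only, not the schema of any real system.

# Illustrative mapping of (role, workflow stage) -> permitted actions.
# Roles, stages and action names are examples only.

PERMITTED_ACTIONS = {
    ("patient", "post_consultation"): {"review_details", "access_records", "send_symptom_update"},
    ("clinician", "consultation"): {"request_test", "record_notes"},
    ("clinician", "follow_up"): {"respond_to_symptom_update", "request_test"},
    ("administrator", "any"): {"schedule_appointment", "update_contact_details"},
}

def allowed(role, stage, action):
    """Check whether a user in a given role may perform an action at this workflow stage."""
    actions = set(PERMITTED_ACTIONS.get((role, stage), set()))
    actions |= PERMITTED_ACTIONS.get((role, "any"), set())
    return action in actions

# Example: a patient reviewing their records after a consultation.
assert allowed("patient", "post_consultation", "access_records")
assert not allowed("patient", "post_consultation", "request_test")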

Need for a test environment with simulation
To improve organisational resilience and dependability, it is important to develop new methods for detection and correction of organisational problems. To test these problem detection and recovery methods, it is useful to run simulated scenarios where human mistakes and IT failures can occur together. “Simulations” might involve people participating (as in a kind of role-playing game) or simulated computational agents [Macal 2016].

Examples of failures that might be simulated:

  • human mistakes (e.g. choosing the wrong test)
  • administration failure: a patient receives no response to a request that should have a time limit
  • software failure: e.g. data interoperability issues
  • malware
  • hardware failure
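
To suggest how such failures could be injected into an agent-based simulation, here is a very small sketch. The workflow representation, failure probabilities and random seeding are invented for illustration; a real test environment would be far richer.

# Minimal sketch of injecting failures into a simulated workflow run.
# The failure types follow the list above; the probabilities, workflow
# representation and random seeding are invented for illustration.

import random

FAILURE_TYPES = [
    "human_mistake",            # e.g. choosing the wrong test
    "administration_failure",   # e.g. a request not answered within its time limit
    "software_failure",         # e.g. a data interoperability issue
    "malware",
    "hardware_failure",
]

def run_simulated_step(step_name, rng, failure_rate=0.1):
    """Execute one workflow step, sometimes injecting a random failure."""
    if rng.random() < failure_rate:
        return {"step": step_name, "ok": False, "failure": rng.choice(FAILURE_TYPES)}
    return {"step": step_name, "ok": True}

def run_scenario(steps, failure_rate=0.1, seed=42):
    """Run a whole scenario and return a log that a detection method could analyse."""
    rng = random.Random(seed)
    return [run_simulated_step(step, rng, failure_rate) for step in steps]

# Example scenario: a simple test-ordering workflow.
log = run_scenario(["request_test", "perform_test", "report_result", "notify_patient"])
failures = [entry for entry in log if not entry["ok"]]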

A test environment needs to be coupled with the iterative development of the system being tested. This would involve developing increasingly capable problem-detection software in parallel with increasingly challenging scenarios. For example, the first version might involve simple errors that are easy to detect. Subsequent stages might involve increasingly detailed work scenarios with more complex errors or failures. The more advanced stages might also involve real users in different roles (e.g. nursing students, medical students) and include time pressure.

Importance of agile and participatory design
In addition to developing safe systems, changing them safely is also important, so the development and test methodology needs to include change management. Agile software engineering is particularly important here, along with participatory design (co-design) methods. Ideally the system would be co-designed iteratively by the different users as they become aware of error-prone situations (such as cognitive overload) while participating in the evaluations. Design needs to be informed by cognitive science as well as computer science.

In later posts, I plan to talk about the role of AI and decision support in organisational dependability.

References: