Metacognition: Part 2 – self-regulation

The previous post on metacognition (part 1) compared metacognition in humans with AI agents, introducing the key concepts of meta-level monitoring and control. Its main focus was on detecting mistakes in reasoning and gaps in knowledge. This post (part 2) argues that metacognition is also important for ensuring that requirements are met in the presence of conflicting pressures. For humans, this is often called “self-regulation”.

Two kinds of thinking
The role of self-regulation is best understood within the larger context of decision-making processes. Human cognition is often described in terms of two kinds of thinking, called “system 1” and “system 2” (Kahneman [1]). System 1 responds quickly to events, but can be biased. System 2 is slower and more effortful, but is good at reasoning. An important property of system 2 is that it can generate hypothetical “what-if” scenarios. In contrast, system 1 only works with information that is immediately available.

Emotions and affective states are closely associated with system 1. This is particularly true of the effects of emotion on cognition. However, system 2 may be involved in generating emotions (such as fear caused by reasoning about hypothetical states). Metacognition is usually associated with system 2. The two kinds of thinking are intended as useful concepts only, and do not correspond to parts of the brain.

Computational models
In computational models of human cognition, metacognition is often represented as an additional level of processing which monitors and controls other components in the architecture, such as perception, learning, reasoning, and planning. An example is H-CogAff (described earlier in https://catmkennedy.com/2020/01/09/what-is-a-cognitive-architecture/). In H-CogAff, the reactive layer is similar to system 1 while the deliberative layer approximates system 2. The metacognition layer monitors and adjusts deliberative and reactive processing. (In H-CogAff, the reactive layer represents an older part of the brain than the deliberative layer, meaning that these layers do not correspond exactly to “system 1” and “system 2”, but their similarities are still important).


In the same way as for cognitive models, applied AI agents can have a hybrid architecture with a reactive and deliberative layer. Deliberation enables the agent to plan in advance while reactivity ensures that it can respond quickly to unexpected events. In this case, the purpose of a hybrid architecture is not to simulate human or animal cognition, but to add useful design features to a real-world system. Metacognition (a meta-level) can be added to monitor and control the reactive and deliberative layers (both of which are “object-level”).
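As a rough illustration of this layered design, the sketch below separates a reactive layer, a deliberative layer, and a meta-level that routes activity through both and records it for monitoring. All class and method names here are invented for the example, not taken from H-CogAff or any real framework:

```python
class ReactiveLayer:
    """Fast, rule-based responses to events (roughly "system 1")."""
    def react(self, event):
        return "dodge" if event == "obstacle" else "ignore"

class DeliberativeLayer:
    """Slower planning over hypothetical steps (roughly "system 2")."""
    def plan(self, goal):
        return [f"step {i} towards {goal}" for i in range(1, 4)]

class MetaLevel:
    """Monitors and (in a fuller version) controls the object-level layers."""
    def __init__(self, reactive, deliberative):
        self.reactive = reactive
        self.deliberative = deliberative
        self.trace = []  # meta-level record of object-level activity

    def handle_event(self, event):
        response = self.reactive.react(event)
        self.trace.append(("reactive", event, response))  # monitoring
        return response

    def pursue_goal(self, goal):
        plan = self.deliberative.plan(goal)
        self.trace.append(("deliberative", goal, len(plan)))  # monitoring
        return plan

agent = MetaLevel(ReactiveLayer(), DeliberativeLayer())
agent.handle_event("obstacle")
agent.pursue_goal("charge battery")
```

In a real system the meta-level would do more than record a trace: it would inspect the trace and adjust the layers, as in the distraction example below.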

Human self-regulation
For humans, system 1 reacts quickly, but not always in a way that is consistent with our goals or values. So we take corrective action (in psychology this is called “self-regulation”). Examples include:
  • Resisting distractions
  • Healthy eating (e.g. resisting cake)
  • Emotion regulation
The first two are about resisting pressures. Emotion regulation is more complex and includes strategies for re-interpreting the meaning of situations that cause emotions as well as strategies for modifying the emotional response itself. (See for example [2], which reviews emotion regulation theories). Some of my research involves computational modelling of emotion regulation [3].

Agent self-regulation
An AI agent can also have a self-regulation capability. For example, if the environment is unpredictable, the agent may need to react quickly to potentially dangerous events. But if it spends too much time reacting to minor events, this causes a “distraction” problem and can prevent a goal from being satisfied within the required time. To solve this problem, the agent must first detect the “distraction” (meta-level monitoring) and then adjust its sensitivity to interruptions (meta-level control). It might reconfigure its priorities so that minor events can be ignored. Meta-level control may also generate learning goals, such as identifying which events waste the most time or which kinds of “system 1” reactions should be suppressed.
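The distraction scenario can be sketched concretely. In the toy example below, the meta-level watches the proportion of time spent reacting rather than pursuing the goal, and raises the interruption threshold when that proportion gets too high. The threshold, the severity scale, and the monitoring window are all invented for the example:

```python
class SelfRegulatingAgent:
    def __init__(self, interrupt_threshold=1):
        # Minimum event severity that is allowed to interrupt the main goal.
        self.interrupt_threshold = interrupt_threshold
        self.goal_time = 0    # time units spent on the main goal
        self.react_time = 0   # time units spent reacting to events

    def step(self, event_severity=None):
        # Object level: either react to the event or work on the goal.
        if event_severity is not None and event_severity >= self.interrupt_threshold:
            self.react_time += 1
        else:
            self.goal_time += 1
        self.regulate()  # meta level

    def regulate(self):
        total = self.goal_time + self.react_time
        # Meta-level monitoring: detect a distraction pattern over a window.
        if total >= 10 and self.react_time / total > 0.5:
            # Meta-level control: raise the bar for interruptions so that
            # minor events are ignored from now on.
            self.interrupt_threshold += 1
            self.goal_time = self.react_time = 0  # start a new window

agent = SelfRegulatingAgent()
for _ in range(10):
    agent.step(event_severity=1)  # a stream of minor events
# After the window fills with reactions, the threshold is raised to 2,
# so further severity-1 events no longer interrupt the goal.
```

A fuller version would also log why the threshold was changed, which is the kind of explicit record that supports learning goals like those mentioned above.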

Other self-regulation scenarios exist. For example, an AI agent that makes decisions in safety-critical scenarios could monitor the integrity of critical software that it relies on (e.g. software providing sensor data) and re-configure or replace faulty components as necessary.
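A minimal sketch of this integrity-monitoring scenario, with a sensor feed as the critical component: the meta-level checks the component's output and swaps in a backup when it fails. The component names and the failure check are illustrative assumptions only:

```python
class SensorFeed:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def read(self):
        # A faulty feed returns no data.
        return 42.0 if self.healthy else None

class IntegrityMonitor:
    """Meta-level: detects a faulty critical component and replaces it."""
    def __init__(self, primary, backup):
        self.active = primary
        self.backup = backup

    def read_checked(self):
        value = self.active.read()
        if value is None:              # monitoring: faulty output detected
            self.active = self.backup  # control: re-configure / replace
            value = self.active.read()
        return value

monitor = IntegrityMonitor(SensorFeed("lidar", healthy=False),
                           SensorFeed("lidar-backup"))
reading = monitor.read_checked()  # backup is substituted transparently
```

Real integrity checks would of course be richer (checksums, heartbeats, plausibility checks on the data), but the monitor/control split is the same.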

Some “self-adaptive” software architectures [4] have the foundations of self-regulation, and could be described as “meta-reasoning” if they include explicit reasoning and explanation about the problems they have detected and the corrections they are making.

In later blog posts, I plan to discuss the role of metacognition in ethical reasoning.

References
  1. Kahneman, D. Thinking Fast and Slow. Farrar, Straus and Giroux (2011)
  2. Kobylińska D. and Kusev P. Flexible Emotion Regulation: How Situational Demands and Individual Differences Influence the Effectiveness of Regulatory Strategies. Frontiers in Psychology Volume 10, Article 72, 2019. https://www.frontiersin.org/articles/10.3389/fpsyg.2019.00072/full
  3. Kennedy, C. M. Computational Modelling of Metacognition in Emotion Regulation. In 8th Workshop on Emotion and Computing at the German Conference on AI (KI-2018), Berlin, Germany, (2018). https://www.cs.bham.ac.uk/~cmk/emotion-regulation-metacognition.pdf
  4. Macías-Escrivá, F., Haber, R., del Toro, R., and Hernandez, V. Self-adaptive systems: A survey of current approaches, research challenges and applications. Expert Systems with Applications, Volume 40, Issue 18, 2013. https://www.sciencedirect.com/science/article/pii/S0957417413005125.
