
What is a Cognitive Architecture?

Some of my research involves the computational modelling of cognition and how it interacts with emotion. Computational modelling is useful for the study of human or animal cognition, as well as for the building of artificial cognitive systems (e.g. robots). The cognitive process being modelled may be understood as an autonomous system which senses information from its environment and uses this information to determine its next action. Such an autonomous system is often called an “agent” [Russell and Norvig 2010]. A cognitive architecture is a specification of the internal structure of a cognitive agent, defining the components of cognition and their interactions. The concept of “architecture” is important because it integrates the various functions of cognition into a coherent system. Such integration is necessary for building complete autonomous agents and for studying the interactions between different components of natural cognition, such as reasoning and motivation.

Multiple Levels
Architectures can be defined at different levels of detail. For example, [Marr 1982] defines three levels which can be applied to cognitive architecture as follows:
Level 1: “Computational theory”: this specifies the functions of cognition – what components are involved and what are their inputs and outputs?
Level 2: “Representation and algorithm”: this specifies how each component accepts its input and generates its output. For example, representations may include symbolic logic or neural nets; algorithms may include inference algorithms (for logical deduction) or learning algorithms.
Level 3: “Implementation”: this specifies the hardware, along with any supporting software and configurations (e.g. simulation software, physical robot or IT infrastructure).

At level 1, the architecture specifies the components and their interfaces. For example, a perception component takes raw sense data as input and identifies objects in a scene; a decision component generates an action depending on objects identified by the perception component. Level 2 fills in the detail of how these components work. For example, the perception component might generate a logic-based representation of the raw data that it has sensed, while the decision component uses logic-based planning to generate actions. Level 3 provides an executable instantiation of the architecture. An instantiation may be a physical robot, a software product or prototype, or a model of an agent/robot which can be run as a simulation on a particular platform.
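
To make the levels concrete, here is a minimal sketch in Python (my own illustration, not taken from any particular architecture; all class and component names are invented) of a level 1 specification: the components and their interfaces are pinned down, while the representations and algorithms behind them are deliberately left abstract.

```python
from abc import ABC, abstractmethod
from typing import Any, List

class Percept:
    """An object identified in the scene (level 1 leaves its representation open)."""
    def __init__(self, label: str, data: Any = None):
        self.label = label
        self.data = data

class PerceptionComponent(ABC):
    """Interface only: raw sense data in, identified objects out."""
    @abstractmethod
    def perceive(self, raw_data: Any) -> List[Percept]:
        ...

class DecisionComponent(ABC):
    """Interface only: identified objects in, an action out."""
    @abstractmethod
    def decide(self, percepts: List[Percept]) -> str:
        ...

class Agent:
    """Level 1 wiring: perception feeds decision; how each works is a level 2 choice."""
    def __init__(self, perception: PerceptionComponent, decision: DecisionComponent):
        self.perception = perception
        self.decision = decision

    def step(self, raw_data: Any) -> str:
        return self.decision.decide(self.perception.perceive(raw_data))
```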

Environment and requirements
When designing an architecture, the environment of the agent needs to be considered. This defines the situations and events that the agent encounters. It is also important to define requirements that the agent must satisfy in a given environment. These may be capabilities (e.g. to detect the novelty of an unforeseen situation or to act on behalf of human values sufficiently accurately so that humans can delegate tasks to the agent in the given environment). If a natural system is being modelled (e.g. an animal), the requirements may simply be survival in the given environment. Assumptions made about the environment help to constrain the requirements.

Architecture examples
Example architectures that are particularly relevant to my research include H-CogAff [Sloman et al. 2005] and MAMID [Hudlicka 2007]. Both model human cognition. H-CogAff emphasises the difference between fast instinctive reactions and slower reasoning. MAMID focuses on emotion generation and its effects on cognition. Architectures need not be executable (i.e. defined at all three levels). For example, H-CogAff is not a complete architecture that can be translated into an executable instance, but it is a useful guideline.

Broad-and-shallow architectures
Executable architectures can be developed using iterative stepwise refinement, beginning with simple components and gradually increasing their complexity. The complexity of the environment can also be increased gradually. To experiment with ideas quickly, it is important to use a rapid-prototyping methodology. This allows possibilities to be explored and unforeseen difficulties to be discovered early. To enable rapid prototyping, an architecture should be made executable as early as possible in the development process. A useful approach is to start with a “broad and shallow” architecture [Bates et al. 1991]. This kind of architecture is mostly defined at level 1, with artificially simplified levels 2 and 3. For example, at level 2, the perception component may be populated temporarily by a simple data query method (does this object exist in the data?) and the decision component might contain simplified “if-then” rules. For level 3, a simulation platform suitable for rapid prototyping may be used.
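
As an illustration, the placeholder components described above might look something like the following sketch. This is my own toy example rather than code from [Bates et al. 1991]: perception is a simple membership query over already-labelled data, decision-making is a handful of if-then rules, and level 3 is just a print loop standing in for a real simulation platform.

```python
from typing import List

class ShallowPerception:
    """Placeholder perception: a simple membership query over already-labelled data."""
    def perceive(self, raw_data: List[str]) -> List[str]:
        known_objects = {"obstacle", "food", "charger"}  # hypothetical object labels
        return [obj for obj in raw_data if obj in known_objects]

class ShallowDecision:
    """Placeholder decision-making: a few hand-written if-then rules."""
    def decide(self, percepts: List[str]) -> str:
        if "obstacle" in percepts:
            return "turn"
        if "food" in percepts:
            return "approach"
        return "explore"

def run_simulation(steps: List[List[str]]) -> None:
    """Level 3 stand-in: feed a scripted sequence of 'scenes' through the agent."""
    perception, decision = ShallowPerception(), ShallowDecision()
    for raw in steps:
        print(raw, "->", decision.decide(perception.perceive(raw)))

if __name__ == "__main__":
    run_simulation([["obstacle", "wall"], ["food"], []])
```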

In later posts, I will discuss how this methodology fits in with AI research more generally and ethical AI systems in particular.

References:

  • [Russell and Norvig 2010] Russell, S. J., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall.
  • [Marr 1982] Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W. H. Freeman. Full text: http://s-f-walker.org.uk/pubsebooks/epubs/Marr]_Vision_A_Computational_Investigation.pdf
  • [Bates et al. 1991] Bates, J., Loyall, A. B., & Reilly, W. S. (1991). Broad agents. In Proceedings of the AAAI Spring Symposium on Integrated Intelligent Architectures, Stanford, CA. Reprinted in SIGART Bulletin, 2(4), Aug. 1991, pp. 38–40.
  • [Sloman et al. 2005] Sloman, A., Chrisley, R., & Scheutz, M. (2005). The architectural basis of affective states and processes. In J.-M. Fellous & M. A. Arbib (Eds.), Who Needs Emotions? New York: Oxford University Press. Full text: http://www.sdela.dds.nl/entityresearch/sloman-chrisley-scheutz-emotions.pdf
  • [Hudlicka 2007] Hudlicka, E. (2007). Reasons for emotions: Modeling emotions in integrated cognitive systems. In W. Gray (Ed.), Integrated Models of Cognitive Systems, 137. New York: Oxford University Press.

Representing user perspectives in IT systems

Taking user perspectives into account when designing technology means that the technology should fit around users’ concerns and perceptions. This process is generally called “user-centered design” (see for example: http://en.wikipedia.org/wiki/User-centered_design). Why is this important? It can lead to more simplicity, fewer errors and more satisfied users. There are other advantages that go deeper – such as gaining new insights into what users really require.

The key components of a perspective can be described as follows:

Concepts: how do we describe and visualise the world? For example, when we prepare documents online, we think of objects such as text and diagrams. These are the concepts. They also include the applicable operations (create, edit, save, etc.) and the workflow of producing a document. Similarly, when we look up an online map, the concepts include streets, buildings, green spaces and so on. Shopping in physical shops also involves a workflow with applicable operations.

Concerns: what kinds of things are important (including goals, values, etc.)? For example, academics are often concerned about collaborative documents being easy to produce and manage, as well as about meeting paper submission deadlines. A user with low literacy may be concerned about completing an online form correctly without assistance. Differing concerns mean that the key concepts also differ. For people with mobility restrictions, concepts such as “easy walking area” or “slow traffic” on a map may be key, while drivers with busy schedules might look for “fast traffic”.

Representing user perspectives in IT goes way beyond usability or UX (important though they are). In my view, it should satisfy the following requirements:

1. It should be about the actual behaviour of the IT infrastructure that the user depends on to satisfy their goals. To what extent are the user’s concerns actually guiding the infrastructure’s resource allocation and priorities? This is a particular issue with healthcare systems and privacy.

2. The infrastructure that the user depends on should be transparent and accountable. The visibility should be adjustable depending on the perspective of the particular user.

3. It’s not just about end-users or customers, but also about staff roles within an organisation. For example, a system administrator will have different concepts and concerns from those of a software developer (although the two roles increasingly overlap in the field of “DevOps”). So “the things that matter” then include: “Does this help me to do my job effectively? Does it cause more complexity and stress? Does it reduce errors or create more potential for errors? Does it support creativity?”

4. It’s not just about design; it’s also about models based on a user’s perspective. Models allow automated decision-making and prediction. Examples include statistical predictions based on user preferences. These could be said to represent some aspects of a user’s perspective. But I am thinking here particularly about qualitative models. These are models that represent the user’s concepts and concerns in a symbolic language. The language must be human-readable and machine-readable. These topics take us into the field of knowledge-based systems, which I will talk about in a later post.
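
To give a flavour of what such a qualitative model might look like, here is a small sketch in Python. The user type, concepts, concerns and map layers are all invented for illustration; the point is only that the perspective is written in a form that is both human-readable and directly usable by a program.

```python
# A hypothetical qualitative model of one user's perspective.
perspective = {
    "user": "pedestrian_with_mobility_restriction",
    "concepts": ["easy_walking_area", "slow_traffic", "rest_point"],
    "concerns": {
        "avoid_steep_routes": "high",
        "minimise_walking_distance": "high",
        "journey_time": "low",
    },
}

def layers_to_show(perspective: dict, available_layers: list) -> list:
    """Select which map layers to display, guided by the user's own concepts."""
    return [layer for layer in available_layers if layer in perspective["concepts"]]

print(layers_to_show(perspective, ["fast_traffic", "easy_walking_area", "rest_point"]))
```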

Smart Assistance for Lifestyle Decisions

Smart assistance technology helps users to achieve goals using informative messages such as recommendations or reminders. To be “smart”, the messages need to be sensitive to the user’s changing context, including their current knowledge and perceptions, their mood and their environment.

Right now, I’m starting to explore the feasibility of a “smart assistance” app to support users with everyday decisions, for example with food, shopping, finance or time-management. When stressed and overloaded with information, we are easily influenced towards decisions we would disagree with if we had the right information and time to think. Even when goals have been set, they are pushed aside by more short-term pressures. Poor decisions can lead to poor physical and mental health, as well as other negative outcomes.

The project is funded by an UnLtd small grant through the University of Manchester Social Enterprise innovation initiative:
http://umip.com/umip-awards-social-enterprise-innovation-create-buzz-manchester/.

Why is this new?

There are lots of apps out there (see for example http://www.businessinsider.com/functionality-is-key-to-the-future-of-health-apps-2014-6), but most are targeted towards individuals as “consumers” and do not address the more challenging psychological and social problems. Usually they address a single issue in isolation, such as exercise. Although Google and Apple have started to develop “health platforms” (http://www.businessinsider.com/google-fit-apple-healthkit-2014-6), they are mostly concerned with integrating sensors and tracking physical health, not with psychological health.

Some initial requirements

These are the broad requirements that I am starting with:

  1. Citizen led: the users should be in control of both the technology and the process by which they improve their decisions and actions.
  2. Promoting reflection and reasoning: reflection and reasoning are high-level cognitive processes that can put the user in control because they involve awareness, deliberate goal setting and planning. This contrasts with behavioural “nudging”, which affects decisions unconsciously and has been criticised as unethical (http://www.newscientist.com/article/mg21228376.500-nudge-policies-are-another-name-for-coercion.html#.U-ZXsvldVmU). For this reason I think the app should be called a “cognitive assistance” app.
  3. Promoting social support: Decisions are influenced by the wider social context. Therefore the app should not just be helping individuals in isolation, but must take account of their social environment and promote social support.

Some technical and research challenges

One way to help people resist pressures to make bad decisions is to raise awareness of significant and relevant information (such as why the original goals were set in the first place) and to encourage reasoning. In particular, this might include the following:

  • Smart prompts and reminders – sensitive to the person’s mood and circumstances. Users could also be prompted to send encouragement to others when they may need it.
  • Visualisations to draw attention to the important things, when a user is overloaded with information and options.
  • A kind of dialogue system to help with reasoning (not necessarily using text).

Getting these right is extremely challenging. In particular, the app must be sensitive to the user’s state of mind and changing context. Smart sensor technology can help with this (see, for example, the MindTech project at http://www.mindtech.org.uk/technologies-overview.html). The challenge that particularly interests me is modelling the user’s mental attitudes so that reminders are not disruptive or insensitive. To support reasoning, some principles of online Cognitive Behavioural Therapy (http://www.nice.org.uk/Guidance/ta97) may be applied. In future posts I will discuss the role of theories and models, as well as the participatory software design process.
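
As a very rough illustration of the kind of sensitivity I have in mind, a reminder might be gated by simple context checks before it is ever shown. The context fields and rules below are invented for the sake of the sketch; a real system would need a much richer model of the user’s attitudes and circumstances.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserContext:
    """Hypothetical snapshot of the user's current state (e.g. from sensors or self-report)."""
    stress_level: str   # "low", "medium" or "high"
    busy: bool
    goal: str           # a goal the user has set for themselves
    goal_reason: str    # why they set it, captured when the goal was created

def maybe_prompt(ctx: UserContext) -> Optional[str]:
    """Return a prompt only when it is likely to help rather than disrupt."""
    if ctx.busy or ctx.stress_level == "high":
        return None  # stay silent: a reminder now would probably be disruptive
    # Re-present the user's own reasoning rather than nudging them unconsciously.
    return f"You set the goal '{ctx.goal}' because: {ctx.goal_reason}. Still on track?"

print(maybe_prompt(UserContext(stress_level="low", busy=False,
                               goal="cook at home three nights a week",
                               goal_reason="to save money and eat better")))
```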