Some of my research involves the computational modelling of cognition and how it interacts with emotion. Computational modelling is useful for the study of human or animal cognition, as well as for building artificial cognitive systems (e.g. robots). The cognitive process being modelled may be understood as an autonomous system which senses information from its environment and uses this information to determine its next action. Such an autonomous system is often called an “agent” [Russell and Norvig 2010]. A cognitive architecture is a specification of the internal structure of a cognitive agent, defining the components of cognition and their interactions. The concept of “architecture” is important because it integrates the various functions of cognition into a coherent system. Such integration is necessary for building complete autonomous agents and for studying the interactions between different components of natural cognition, such as reasoning and motivation.
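To make the agent concept concrete, here is a minimal sketch of a sense-decide-act loop in Python. Everything in it (the `CountingEnvironment`, the method names `sense`, `decide`, `observe`, `apply`) is my own illustration, not taken from any particular framework or from [Russell and Norvig 2010]:

```python
# Minimal sketch of an agent as a sense-decide-act loop.
# All names are illustrative; no particular framework is assumed.

class CountingEnvironment:
    """A trivial environment that emits an increasing counter as sense data."""
    def __init__(self):
        self.time = 0

    def observe(self):
        self.time += 1
        return {"time": self.time}

    def apply(self, action):
        print(f"t={self.time}: agent chose {action!r}")

class Agent:
    def sense(self, percept):
        self.percept = percept  # information sensed from the environment

    def decide(self):
        # Use the sensed information to determine the next action.
        return "rest" if self.percept["time"] % 2 else "explore"

env, agent = CountingEnvironment(), Agent()
for _ in range(4):  # the agent's control loop
    agent.sense(env.observe())
    env.apply(agent.decide())
```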
Multiple levels
Architectures can be defined at different levels of detail. For example, [Marr 1982] defines three levels which can be applied to cognitive architecture as follows:
Level 1: “Computational theory”: this specifies the functions of cognition – what components are involved and what are their inputs and outputs?
Level 2: “Representation and algorithm”: this specifies how each component accepts its input and generates its output. For example, representations may include symbolic logic or neural nets; algorithms may include inference algorithms (for logical deductions) or learning algorithms.
Level 3: “Implementation”: this specifies the hardware, along with any supporting software and configurations (e.g. simulation software, physical robot or IT infrastructure).
At level 1, the architecture specifies the components and their interfaces. For example, a perception component takes raw sense data as input and identifies objects in a scene; a decision component generates an action depending on objects identified by the perception component. Level 2 fills in the detail of how these components work. For example, the perception component might generate a logic-based representation of the raw data that it has sensed, while the decision component uses logic-based planning to generate actions. Level 3 provides an executable instantiation of the architecture. An instantiation may be a physical robot, a software product or prototype, or a model of an agent/robot which can be run as a simulation on a particular platform.
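The sketch below illustrates how levels 1 and 2 can be separated in code. The abstract interfaces stand in for a level 1 specification, and the concrete classes for one artificially simple level 2 choice; all class and method names here are my own, assumed for illustration:

```python
# Sketch of the level 1 / level 2 distinction. Level 1 fixes component
# interfaces; level 2 commits to a representation and algorithm.
# All names are illustrative.

from abc import ABC, abstractmethod

# Level 1: components and their inputs/outputs, nothing more.
class Perception(ABC):
    @abstractmethod
    def identify_objects(self, raw_data: bytes) -> list[str]:
        """Take raw sense data; return identified objects."""

class Decision(ABC):
    @abstractmethod
    def choose_action(self, objects: list[str]) -> str:
        """Take identified objects; return an action."""

# One possible level 2: a simple rule-based filling-in.
class KeywordPerception(Perception):
    def identify_objects(self, raw_data: bytes) -> list[str]:
        words = raw_data.decode().split()
        return [w for w in words if w in {"obstacle", "goal"}]

class RuleBasedDecision(Decision):
    def choose_action(self, objects: list[str]) -> str:
        if "obstacle" in objects:
            return "avoid"
        return "approach" if "goal" in objects else "explore"

# Wiring the level 2 choices into the level 1 interfaces:
perception, decision = KeywordPerception(), RuleBasedDecision()
print(decision.choose_action(perception.identify_objects(b"goal ahead")))  # approach
```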
Environment and requirements
When designing an architecture, the environment of the agent needs to be considered. This defines the situations and events that the agent encounters. It is also important to define the requirements that the agent must satisfy in a given environment. These may be capabilities, for example the ability to detect the novelty of an unforeseen situation, or to act in accordance with human values accurately enough that humans can delegate tasks to the agent in that environment. If a natural system is being modelled (e.g. an animal), the requirements may simply be survival in the given environment. Assumptions made about the environment help to constrain the requirements.
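One way to make this concrete (my own illustrative sketch, not a standard formalism) is to treat the environment as a source of situations and a requirement as a testable predicate over the agent's responses in it. The “desert” environment and its “heat” event below are entirely hypothetical:

```python
# Illustrative sketch: an environment as a source of situations, and a
# requirement as a testable predicate over the agent's responses in it.

import random

def desert_environment(steps, seed=0):
    """Yield a sequence of situations; the 'heat' event is assumed here
    purely for illustration."""
    rng = random.Random(seed)
    for _ in range(steps):
        yield {"event": rng.choice(["calm", "heat"])}

def survives(agent_policy, environment):
    """Requirement: the agent must respond to every 'heat' event by seeking
    shade (a stand-in for 'survival in the given environment')."""
    return all(
        agent_policy(situation) == "seek_shade"
        for situation in environment
        if situation["event"] == "heat"
    )

# A candidate policy and its check against the requirement:
policy = lambda s: "seek_shade" if s["event"] == "heat" else "forage"
print(survives(policy, desert_environment(steps=100)))  # True
```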
Architecture examples
Example architectures that are particularly relevant to my research include H-CogAff [Sloman et al. 2005] and MAMID [Hudlicka 2007]. Both model human cognition. H-CogAff emphasises the difference between fast instinctive reactions and slower deliberative reasoning. MAMID focuses on emotion generation and its effects on cognition. Architectures need not be executable (i.e. defined at all of levels 1 to 3). For example, H-CogAff is not a complete architecture that can be translated into an executable instance, but it is a useful guideline.
Broad-and-shallow architectures
Executable architectures can be developed using iterative stepwise refinement, beginning with simple components and gradually increasing their complexity. The complexity of the environment can also be increased gradually. To experiment with ideas quickly, it is important to use a rapid-prototyping methodology. This allows possibilities to be explored and unforeseen difficulties to be discovered early. To enable rapid prototyping, an architecture should be made executable as early as possible in the development process. A useful approach is to start with a “broad and shallow” architecture [Bates et al. 1991]. This kind of architecture is mostly defined at level 1, with artificially simplified levels 2 and 3. For example, at level 2, the perception component may be populated temporarily by a simple data query method (does this object exist in the data?) and the decision component might contain simplified “if-then” rules. For level 3, a simulation platform suitable for rapid prototyping may be used.
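The sketch below shows what such a broad-and-shallow starting point might look like: every level 1 component is present, but each level 2 body is a deliberate placeholder. As before, all names are illustrative:

```python
# Sketch of a broad-and-shallow instantiation: every component of the
# level 1 architecture is present, but each is deliberately shallow.

class ShallowPerception:
    """Placeholder level 2: a simple data query, not real perception."""
    def identify_objects(self, data):
        return [obj for obj in ("obstacle", "goal") if obj in data]

class ShallowDecision:
    """Placeholder level 2: simplified if-then rules."""
    def choose_action(self, objects):
        if "obstacle" in objects:
            return "avoid"
        if "goal" in objects:
            return "approach"
        return "explore"

# Level 3 stand-in: a few hand-written situations instead of a simulator.
perception, decision = ShallowPerception(), ShallowDecision()
for data in [{"goal"}, {"obstacle", "goal"}, set()]:
    print(data, "->", decision.choose_action(perception.identify_objects(data)))
```

The point of keeping the interfaces fixed is that each placeholder can later be replaced by a deeper level 2 implementation, or the level 3 stand-in by a real simulation platform, without restructuring the architecture.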
In later posts, I will discuss how this methodology fits in with AI research more generally and ethical AI systems in particular.
References:
- [Russell and Norvig 2010] Russell, S. J., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
- [Marr 1982] Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W. H. Freeman. Full text: http://s-f-walker.org.uk/pubsebooks/epubs/Marr]_Vision_A_Computational_Investigation.pdf
- [Bates et al. 1991] Bates, J., Loyall, A. B., & Reilly, W. S. (1991). Broad agents. In Proceedings of the AAAI Spring Symposium on Integrated Intelligent Architectures, Stanford, CA. Reprinted in SIGART Bulletin, 2(4), Aug. 1991, pp. 38–40.
- [Sloman et al. 2005] Sloman, A., Chrisley, R., & Scheutz, M. (2005). The Architectural Basis of Affective States and Processes. In J.-M. Fellous & M. A. Arbib (Eds.), Who Needs Emotions? New York: Oxford University Press. Full text: http://www.sdela.dds.nl/entityresearch/sloman-chrisley-scheutz-emotions.pdf
- [Hudlicka 2007] Hudlicka, E. (2007). Reasons for emotions: Modeling emotions in integrated cognitive systems. In W. Gray (Ed.), Integrated Models of Cognitive Systems, 137. New York: Oxford University Press.