Human-centred AI Systems

When considering ethical AI, I find it useful to distinguish between two types of AI system, each of which has different ethical and dependability considerations.

Type 1: “Nature-inspired” AI systems: their purpose is to help understand and replicate key features of natural intelligence. They may be simulations or experimental hardware robots. Examples include neuroscience models, swarm intelligence, artificial societies and cognitive architectures. They are not deployed in environments where humans depend on them. Although these models would mostly be developed for research purposes, it is conceivable that they might be applied to a practical purpose (e.g. games, virtual characters, or exploring Mars).

Type 2: “Human-centred” AI systems: they are developed for a particular purpose within a human environment (e.g. an organisation or a home). Examples include medical decision support systems and home robots. These systems need to be dependable, particularly if they involve autonomous actions. They may contain biologically inspired models originally developed as Type 1 research models (e.g. cognitive architectures, neural network learning), but only if such models can be applied effectively to satisfy the system requirements.

Importance of requirements
The two categories above are approximate and not mutually exclusive. For example, a Type 1 system may involve hardware robots with experimental human interaction (e.g. humans teaching a robot the names of objects and how to manipulate them). In such cases, safety may become important, but to a lesser extent than for most Type 2 systems: such a robot could misidentify objects or misinterpret human communication without serious consequences. The main difference between the two types is that the requirements of a Type 2 system are specified by the humans who depend on it, while the main requirement for a Type 1 system is to survive in a challenging environment or to solve problems that biological systems can solve. Both kinds of requirement can be combined in a Type 2 system (but the human-specified requirements would take precedence).

Knowledge and communication
For Type 1 systems, any knowledge that the system acquires would be relevant to its goals but not necessarily relevant to humans. For example, an agent-based model might be used to study the evolution of communication in a simulated society. The evolving language constructs would be used to label entities in the simulated world that are relevant to the agents, but the labels (and possibly the labelled entities) might not be meaningful to humans. Similarly, applied systems such as a Mars explorer may develop concepts which humans don’t use and need not be explained (unless they become relevant to human exploration). These AI systems would acquire their knowledge autonomously by interacting with their environment (although some innate “scaffolding” may be needed).

For Type 2 systems, the AI system’s knowledge needs to connect with human concepts and values. Knowledge acquisition using machine learning is a possibility, but there are debates on its limitations; a good discussion is in Rebooting AI [Marcus and Davis 2019]. Even if machine learning of human values were technically possible, it might not be desirable. See, for example, the discussion in [Dignum 2019], where “bottom-up” learning of values is contrasted with “top-down” specification (hybrids are possible). I think there is an important role for human participation and expertise in the knowledge acquisition process, and it makes sense for some knowledge to be hand-crafted (in addition to learning). In future posts, I plan to explore what this means for system design in practice.
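To make the top-down/bottom-up contrast concrete, here is a minimal illustrative sketch of one possible hybrid: hand-crafted constraints specified by humans act as a filter over preferences that a learning component might have acquired. All names in it (`HARD_CONSTRAINTS`, `LEARNED_SCORES`, `choose_action`) are invented for this example and do not come from any particular system or from the cited books.

```python
# Hypothetical hybrid of "top-down" hand-crafted values and "bottom-up"
# learned preferences. Invented example, not a real system's design.

# Top-down: constraints specified by humans. An action violating any
# constraint is ruled out regardless of how highly the learner rates it.
HARD_CONSTRAINTS = [
    lambda action: action["risk_to_human"] == 0,  # never endanger a person
    lambda action: action["consent"],             # only act with consent
]

# Bottom-up: stand-in for a learned preference model (e.g. scores
# estimated from interaction data). Here it is just a fixed lookup.
LEARNED_SCORES = {
    "speed_through_hall": 0.95,
    "fetch_medicine": 0.9,
    "open_window": 0.6,
}

def choose_action(candidates):
    """Pick the highest-scoring action that passes every hard constraint."""
    permitted = [a for a in candidates
                 if all(rule(a) for rule in HARD_CONSTRAINTS)]
    if not permitted:
        return None  # defer to a human rather than act
    return max(permitted, key=lambda a: LEARNED_SCORES[a["name"]])

actions = [
    {"name": "speed_through_hall", "risk_to_human": 1, "consent": True},
    {"name": "fetch_medicine", "risk_to_human": 0, "consent": True},
    {"name": "open_window", "risk_to_human": 0, "consent": False},
]
print(choose_action(actions)["name"])  # fetch_medicine
```

The point of the sketch is the precedence relation discussed above: the learned component proposes, but the human-specified requirements dispose, and when nothing permitted remains the system defers rather than acting.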

References:
  • [Dignum 2019] Dignum, V. (2019). Responsible Artificial Intelligence. Springer.
  • [Marcus and Davis 2019] Marcus, G. and Davis, E. (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon Books, USA.
