5  Agents

An agent is an entity that acts in an environment. A software agent is an automated system designed to carry out useful tasks autonomously or semi-autonomously.

A key feature of agents is that they interact with their environment: they perceive it through sensors, act on it through effectors, and may share it with other agents.

5.1 Types of Agents

Agents can be categorized based on their complexity and decision-making process:

  1. Simple Reflex Agents:
    • These agents lack internal state and react directly to current perceptions based on predefined condition-action rules.
    • They are often referred to as stimulus-response agents.
    • They are simple but can only operate effectively in fully observable environments where the correct action depends only on the current perception.
  2. Simple Reflex Agents with Internal State:
    • These agents extend simple reflex agents by incorporating an internal state that represents some history or simple past experience.
    • They still rely primarily on condition-action rules, but the internal state allows them to handle partially observable environments to a limited extent.
  3. Goal-Driven Agents:
    • These agents have explicit information about their goal(s).
    • Their actions are chosen not just based on the current state, but also on how they contribute to achieving the defined goal.
    • They need to understand when a specific state or sequence of actions constitutes reaching a goal.
  4. Utility-Driven Agents:
    • These agents operate with multiple, potentially conflicting goals.
    • They evaluate the utility or desirability of different states and potential actions.
    • They choose actions that are expected to maximize their overall utility, balancing tradeoffs between goals.
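As an illustration of the last category, a utility-driven agent can be sketched as a function that scores the state each action would lead to and picks the best one. The actions, transition model, and utility function below are invented for this example:

```python
# Minimal sketch of a utility-driven agent: it evaluates the state that
# each available action would produce and chooses the action whose
# resulting state has the highest utility.

def utility_agent(state, actions, result, utility):
    """Choose the action whose resulting state maximizes utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy example: move along an integer line toward position 3.
actions = ["left", "right", "stay"]
result = lambda s, a: s + {"left": -1, "right": 1, "stay": 0}[a]
utility = lambda s: -abs(s - 3)   # closer to 3 is better

print(utility_agent(0, actions, result, utility))  # -> right
```

Conflicting goals would show up as several terms inside the utility function, whose weights encode the tradeoff between them.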

5.2 Rational Agents and Performance Measure

A rational agent is one that acts to achieve the best possible outcome, or, when there is uncertainty, the best expected outcome. This is typically measured by maximizing the expected performance measure.

The performance measure is a criterion that evaluates how successful the agent is in achieving its goals over time. An ideal rational agent would choose actions that maximize this measure given its perceptions.

While theoretically an agent’s behavior could be described by a complete lookup table mapping every possible perception sequence to an action, such a table is infeasibly large for any realistic environment. Agents are therefore designed around compact, formal descriptions of behavior, such as rules or programs, rather than exhaustive listings.
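The contrast between an exhaustive table and a compact summary can be sketched as follows; the vacuum-world percepts and rules are an invented illustration:

```python
# An explicit lookup table needs one entry per possible percept (and, in
# general, per percept sequence); a rule-based policy summarizes the same
# behavior in a few lines.

# Exhaustive table: here only four percepts exist, but real environments
# would require astronomically many entries.
table = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}

# Equivalent compact policy: two condition-action rules cover the table.
def policy(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# The policy reproduces every table entry without storing the table.
assert all(policy(p) == a for p, a in table.items())
```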

5.3 Advanced Agent Structures

More complex agents utilize internal representations and reasoning:

  • Problem-Solving Agents: These agents decide on a complete sequence of actions before acting. They typically operate in accessible environments, where possibilities can be explored systematically before committing to an action (e.g., via search algorithms).

  • Knowledge-Based Agents (KB Agents):

    • These agents maintain an explicit representation of the world (a knowledge base).
    • They choose actions based on this knowledge, reasoning about states and the effects of actions.
    • Knowledge is explicit and often understandable by humans.
    • They are particularly suited for complex and inaccessible environments where the agent cannot simply explore randomly.
    • Challenges include representing complex knowledge and dealing with partially observable environments.
  • Planning Agents:

    • A special case of KB agents that focuses on generating plans.
    • They use explicit knowledge about actions and their effects to generate a full plan (a sequence of actions) to achieve a goal.
    • They assume the environment is initially static, discretizable, and observable, and actions are deterministic, though these assumptions can be relaxed in more advanced planning.
    • Planning requires knowing the current state and having a specified goal.
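Under the stated assumptions (static, discrete, observable environment; deterministic actions), a planning agent can be sketched as a forward search from the current state to the goal. The state space and actions below are invented for illustration:

```python
# Minimal sketch of planning as search: breadth-first search over the
# state space returns a sequence of actions (a plan) reaching the goal,
# assuming deterministic actions and a known current state.

from collections import deque

def plan(start, goal, actions, result):
    """Return a shortest action sequence from start to goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for a in actions:
            nxt = result(state, a)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, steps + [a]))
    return None  # no plan exists

# Toy example: reach position 2 from 0 on an integer line.
actions = ["inc", "dec"]
result = lambda s, a: s + (1 if a == "inc" else -1)
print(plan(0, 2, actions, result))  # -> ['inc', 'inc']
```

Relaxing the assumptions (nondeterministic actions, partial observability) requires richer plan representations than a flat action sequence.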

5.4 Logic and Knowledge Representation

Knowledge-Based Agents start with general knowledge and use logical reasoning to maintain a consistent description of the world based on perceptions and infer actions. Their knowledge is expressed explicitly and declaratively.

A KB agent maintains a Knowledge Base (KB) and interacts with it via a Tell-Ask interface:

  • Tell: Adds new facts or knowledge to the KB (often representing interpreted perceptions).
  • Ask: Queries the KB to retrieve information or infer conclusions.
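The Tell-Ask interface can be sketched as a small class; the fact/rule representation and the naive forward-chaining inference are invented for this example, not a standard API:

```python
# Minimal sketch of a KB with a Tell-Ask interface. Facts are atoms,
# rules are (premises, conclusion) pairs, and ask() forward-chains
# until no new facts can be derived before answering the query.

class KB:
    def __init__(self):
        self.facts = set()
        self.rules = []          # list of (frozenset of premises, conclusion)

    def tell(self, fact=None, rule=None):
        """Add a new fact or implication rule to the KB."""
        if fact is not None:
            self.facts.add(fact)
        if rule is not None:
            premises, conclusion = rule
            self.rules.append((frozenset(premises), conclusion))

    def ask(self, query):
        """Derive all consequences, then report whether query holds."""
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True
        return query in self.facts

kb = KB()
kb.tell(fact="rain")                      # interpreted perception
kb.tell(rule=({"rain"}, "wet_ground"))    # general knowledge
print(kb.ask("wet_ground"))               # -> True
```

Note that the conclusion "wet_ground" was never told directly; it was inferred, which is exactly what distinguishes Ask from a plain database lookup.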

Representing knowledge requires establishing conventions for describing situations, objects, and events. “Computable” knowledge representation allows systems to interact with and reason about this abstracted world model.

Reasoning can be:

  • Rule-based: Explicit steps that can be described.
  • Associative: Experience-based, less explicitly definable steps.

According to Newell and Simon, the ability of an agent to exhibit intelligent behavior is closely tied to the knowledge it possesses; on this view, knowledge-based agents are a natural architecture for intelligent behavior.

5.5 Structure of an Intelligent Agent (General Model)

A general model of an intelligent agent often involves interacting with an environment and processing tasks. The agent cycle might look like this:

  1. Recognize the Task: Interpret the situation and identify the problem or goal.
  2. Select Method: Choose an appropriate problem-solving method from a repertoire based on the task and internal state.
  3. Apply Method: Execute the chosen method on the internal representation; if the result is unsatisfactory, the agent may return to step 2 and select a different method.

This cycle allows for adapting both methods and representations over time.
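The three-step cycle above can be sketched in code; the tasks, recognizer, and method repertoire below are invented for illustration:

```python
# Minimal sketch of the recognize / select / apply cycle: the agent
# classifies the situation as a task, looks up a matching method in
# its repertoire, and applies that method.

def recognize(situation):
    """1. Recognize the task posed by the situation."""
    return "arithmetic" if situation.isdigit() else "echo"

def agent_cycle(situation, repertoire):
    task = recognize(situation)          # 1. recognize the task
    method = repertoire.get(task)        # 2. select a method
    if method is None:
        return "no applicable method"
    return method(situation)             # 3. apply the method

repertoire = {
    "arithmetic": lambda s: int(s) * 2,
    "echo": lambda s: s.upper(),
}

print(agent_cycle("21", repertoire))   # -> 42
print(agent_cycle("hi", repertoire))   # -> HI
```

Adapting over time would mean updating the repertoire (new methods) or the recognizer (new task categories), which is where the cycle connects to learning.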

The Knowledge Level provides a high-level perspective for describing an intelligent system’s behavior from an external observer’s viewpoint. It posits that the system acts as if it possesses knowledge and uses this knowledge rationally to achieve goals.

A Knowledge Level description assumes:

  • The agent possesses knowledge.
  • Some knowledge represents the agent’s goals.
  • The agent can perform actions.
  • The agent acts according to the principle of rationality (choosing actions that, based on its knowledge, lead to goal achievement).

5.6 Physical Symbol System Hypothesis

A Physical Symbol System is a physical system that can create, modify, and manipulate symbol structures (expressions composed of symbols).

The Physical Symbol System Hypothesis states that a physical symbol system possesses the necessary and sufficient means for generalized intelligent action. This hypothesis underlies much of classical AI research, suggesting that intelligence can emerge from the manipulation of symbols according to rules, and that computers, being physical symbol systems, can exhibit intelligence through appropriate programming.

5.7 Learning Agents

A fifth, cross-cutting characteristic of agents, applicable to all the types described above, is the ability to learn.

An agent A is said to learn from experience E with respect to a set of tasks T and a performance measure P if its performance on tasks T, as measured by P, improves with experience E. Learning allows agents to adapt, improve their performance over time, and operate effectively in dynamic or initially unknown environments.
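The T/E/P definition above can be made concrete with a toy learner; the task, data, and averaging rule are all invented for this example:

```python
# Minimal sketch of learning in the T/E/P sense: the task T is guessing
# a hidden threshold, the experience E is a list of labeled examples,
# and the performance measure P is classification accuracy. More
# experience should yield accuracy at least as high.

def learn_threshold(examples):
    """examples: list of (x, label) with label = (x >= hidden threshold)."""
    below = [x for x, pos in examples if not pos]
    above = [x for x, pos in examples if pos]
    if not below or not above:
        return 0.0               # not enough experience to estimate
    return (max(below) + min(above)) / 2

def accuracy(threshold, examples):
    """Performance measure P: fraction of examples classified correctly."""
    return sum((x >= threshold) == pos for x, pos in examples) / len(examples)

data = [(1, False), (2, False), (4, True), (5, True)]
few, many = data[:2], data       # less vs. more experience E

# Performance on T, as measured by P, improves with experience E.
print(accuracy(learn_threshold(few), data)
      <= accuracy(learn_threshold(many), data))  # -> True
```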