Agent-Based Models

Agent-based models include: a set of agents, an environment in which they act, and rules governing agent-agent and agent-environment interactions.

Agents

Reflex agents choose actions based on the current percept (and possibly memory). They are concerned almost exclusively with the current state of the world: they do not consider the future consequences of their actions, and they don't have a goal they are working towards. Rather, they just operate on simple "reflexes".

Agents that plan consider the long(er)-term consequences of their actions, have a model of how the world changes in response to those actions, and work towards a particular goal (or goals); they can find an optimal solution (a plan) for achieving those goals.
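
A minimal sketch of the distinction, with entirely hypothetical names and a toy world: the reflex agent is a direct percept-to-action mapping, while the planning agent searches a model of how actions change the world to produce a full plan.

```python
from collections import deque

def reflex_agent(percept):
    # Reflex agent: a direct percept -> action mapping; no model, no lookahead.
    rules = {"dirty": "suck", "clean": "move"}
    return rules[percept]

def planning_agent(start, goal, successors):
    # Planning agent: breadth-first search over a model of how actions
    # change the world, returning a whole plan (action sequence) to the goal.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None  # no action sequence reaches the goal

# Toy world: positions on a line; moving left/right changes position by 1.
successors = lambda s: [("right", s + 1), ("left", s - 1)] if abs(s) < 5 else []
print(reflex_agent("dirty"))             # -> "suck"
print(planning_agent(0, 3, successors))  # -> ["right", "right", "right"]
```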

Brownian agents

A Brownian agent is described by a set of state variables $u_i^{(k)}$, where $i = 1, \dots, N$ indexes the individual agents and $k$ indexes the different state variables.

These state variables may be external, i.e. observable from outside the agent, or internal degrees of freedom that must be inferred from observable actions.

The state variables can change over time due to the environment or internal dynamics. We can generally express the dynamics of the state variables as follows:

$$ \frac{d u_i^{(k)}}{dt} = f_i^{(k)} + \mathcal F_i^{\text{stoch}} $$

The principle of causality is represented here: any effect, such as a temporal change of the variable $u_i^{(k)}$, has causes on the right-hand side of the equation; these causes are described as a superposition of deterministic and stochastic influences imposed on agent $i$.

In this formulation, $f_i^{(k)}$ is a deterministic term representing influences that can be specified on the time and length scales of the agent, whereas $\mathcal F_i^{\text{stoch}}$ is a stochastic term representing influences that exist but are not observable on those scales.

The deterministic term $f_i^{(k)}$ captures all specified influences that cause changes to the state variable $u_i^{(k)}$, including interactions with other agents $j \neq i$, so it may be a function of the other agents' state variables in addition to external conditions.
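
A hedged sketch of how these dynamics might be simulated with Euler-Maruyama integration: the stochastic term is modeled as Gaussian white noise, and the deterministic term $f_i^{(k)}$ is an arbitrary toy choice (relaxation toward the mean state of the population, standing in for an interaction with the other agents); both modeling choices are assumptions for illustration, not part of the formalism above.

```python
import numpy as np

def simulate(N=50, steps=1000, dt=0.01, coupling=1.0, noise=0.5, seed=0):
    """Euler-Maruyama integration of du_i/dt = f_i + stochastic term,
    for a single state variable (k fixed) per agent."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=N)  # initial states u_i
    history = [u.copy()]
    for _ in range(steps):
        # Deterministic part f_i: toy choice, relaxation toward the
        # population mean (an interaction with the other agents j).
        f = coupling * (u.mean() - u)
        # Stochastic part: Gaussian white noise, scaled by sqrt(dt).
        u = u + f * dt + noise * np.sqrt(dt) * rng.normal(size=N)
        history.append(u.copy())
    return np.array(history)  # shape (steps + 1, N)

traj = simulate()
print(traj.shape, traj[-1].std())  # agents fluctuate around the shared mean
```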

Multi-task and multi-scale problems

A multi-task domain is an environment where an agent performs two or more separate tasks.

A multi-scale domain is a multi-task domain in which tasks must be pursued concurrently and operate at different levels of abstraction.

More generally, multi-scale problems involve working at many different levels of detail.

For example, an AI for a real-time strategy (RTS) game must manage many simultaneous goals at both the micro and macro levels; these goals and their associated tasks are often interwoven, and all of this must be handled in real time. One way to structure such an agent is sketched below.
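
This is a sketch of one common structure, not a claim about any particular game AI: a control loop where macro-level decisions are re-evaluated on a slower cadence than micro-level ones. All names and thresholds are hypothetical.

```python
MACRO_PERIOD = 30  # ticks between macro re-planning; arbitrary choice

def macro_policy(state):
    # Macro level: long-horizon goals (economy vs. army), revised rarely.
    return "expand" if state["minerals"] > 400 else "build_army"

def micro_policy(state, macro_goal):
    # Micro level: per-tick unit control, conditioned on the macro goal.
    return "scout" if macro_goal == "expand" else "attack_move"

def game_loop(ticks=100):
    state = {"minerals": 0}
    macro_goal = None
    for tick in range(ticks):
        if tick % MACRO_PERIOD == 0:
            macro_goal = macro_policy(state)      # slow, coarse-grained task
        action = micro_policy(state, macro_goal)  # fast, fine-grained task
        state["minerals"] += 10  # stand-in for the world's dynamics
    return macro_goal, action

print(game_loop())  # -> ("expand", "scout") with these toy dynamics
```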

Utilities

We encode preferences for an agent, e.g. $A \succ B$ means the agent prefers $A$ over $B$, while $A \sim B$ means the agent is indifferent between them.

A lottery represents these preferences under uncertainty, e.g. $[p, A;\; 1-p, B]$ is a lottery that yields $A$ with probability $p$ and $B$ with probability $1-p$.

Rational preferences must obey the axioms of rationality: orderability, transitivity, continuity, substitutability, monotonicity, and decomposability.

When preferences are rational (i.e., they satisfy these axioms), there exists a utility function that represents them, and an agent holding those preferences behaves as if it were maximizing expected utility.

That is, there exists a real-valued function $U$ such that:

$$ \begin{aligned} U(A) \geq U(B) &\Leftrightarrow A \succeq B \\ U([p_1, S_1; \dots ; p_n, S_n]) &= \sum_i p_i U(S_i) \end{aligned} $$

The second equation says that the utility of a lottery is the expected utility of its outcomes.
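
A small sketch of the second equation, with hypothetical outcomes and arbitrary utility assignments:

```python
def expected_utility(lottery, U):
    """Utility of a lottery [p_1, S_1; ...; p_n, S_n] as its expected utility."""
    return sum(p * U(s) for p, s in lottery)

U = {"A": 10.0, "B": 4.0, "C": 0.0}.get  # arbitrary utility assignments
lottery = [(0.7, "A"), (0.3, "B")]       # the lottery [0.7, A; 0.3, B]
print(expected_utility(lottery, U))      # 0.7*10 + 0.3*4 = 8.2

# Preferences follow from utilities: A ≻ B iff U(A) > U(B).
print(U("A") > U("B"))  # True: the agent prefers A over B
```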
