Modeling Complex Systems for Public Policies. Edited by Bernardo Alves Furtado, Patrícia A. M. Sakowski, Marina H. Tóvolli. 2015.
Two main phenomena of complex systems are:

- Leverage points: points within a complex system where a relatively small, targeted change can shift the behavior of the system as a whole.
- Tipping points (also called phase transitions or bifurcations): points where the system suddenly changes drastically due to a comparatively small adjustment.
Complex systems may exhibit path dependence, where (distantly) past events affect the possibilities of present state. Sensitivity to initial conditions is a characteristic form of path dependence in complex systems, where slight changes in the system's starting point can lead to drastically different outcomes.
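Sensitivity to initial conditions can be demonstrated with a classic toy example: the logistic map. This is a sketch, not from the text; the growth rate r=4.0 and the size of the perturbation are arbitrary illustrative choices.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x).
# r=4.0 (the chaotic regime) and the 1e-9 perturbation are illustrative choices.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # nearly identical starting point

# Early on the trajectories agree; after a few dozen iterations they diverge
# to completely different values, despite the tiny initial difference.
max_gap = max(abs(x - y) for x, y in zip(a, b))
print(max_gap)
```

The same qualitative effect — small input perturbations producing drastically different outcomes — is what makes long-run prediction of complex systems so difficult.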
Complex systems are nonlinear; that is, the outputs are not linearly related to the inputs. The aforementioned properties describe this nonlinearity - changes in the input are not proportional to changes in the output.
The robustness of a complex system is its ability to withstand wholesale removal of or large changes to its components without significant differences in its outcomes. Complex systems may also adapt and evolve.
An important critique of modeling is the Lucas critique of policy (1976), which states that models (in terms of understanding policy) are focused on macro-level behaviors, but because low-level individual behaviors will shift in response to policy, the predictions such models make will be wrong. Complex systems modeling, because it focuses on low-level individual behavior, is more resistant to this critique.
Complex systems modeling also handles diversity and heterogeneity well; these are often left out of models because they complicate things, even though they may be crucial to a good model. Such models also capture networks well, along with the interconnectedness and interactions that come with them.
A common tool for modeling complex systems is agent-based modeling (ABM). The advantage of ABMs is that they are relatively easy to describe - they aren't as abstract as purely mathematical models.
The power of complex systems modeling comes at a cost: such models may be computationally expensive and contain large numbers of free parameters. They also require theoretical expertise about what is being modeled.
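A minimal ABM can make the idea concrete. The sketch below is a simplified Schelling-style segregation model (an invented illustration, not a model from the book): agents of two types move to empty cells when too few of their neighbors are like them, and a global pattern (clustering) emerges from purely local rules. Grid size, tolerance threshold, and step count are arbitrary choices.

```python
import random

# Simplified Schelling-style segregation ABM on a toroidal grid (illustrative
# sketch; the parameters below are arbitrary, not taken from the text).
random.seed(0)
SIZE, EMPTY_FRAC, THRESHOLD = 20, 0.1, 0.4

def make_grid():
    return [None if random.random() < EMPTY_FRAC else random.choice(["A", "B"])
            for _ in range(SIZE * SIZE)]

def neighbors(grid, i):
    """Occupants of the 8 surrounding cells (grid wraps around the edges)."""
    r, c = divmod(i, SIZE)
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            out.append(grid[((r + dr) % SIZE) * SIZE + (c + dc) % SIZE])
    return [n for n in out if n is not None]

def unhappy(grid, i):
    ns = neighbors(grid, i)
    if grid[i] is None or not ns:
        return False
    return sum(n == grid[i] for n in ns) / len(ns) < THRESHOLD

def step(grid):
    # Movers are chosen from the start-of-step grid, then relocated one by one.
    movers = [i for i in range(len(grid)) if unhappy(grid, i)]
    empties = [i for i in range(len(grid)) if grid[i] is None]
    random.shuffle(movers)
    for i in movers:
        if not empties:
            break
        j = empties.pop(random.randrange(len(empties)))
        grid[j], grid[i] = grid[i], None
        empties.append(i)

def mean_similarity(grid):
    """Average fraction of like-typed neighbors, a crude clustering measure."""
    vals = [sum(n == grid[i] for n in neighbors(grid, i)) / len(neighbors(grid, i))
            for i in range(len(grid))
            if grid[i] is not None and neighbors(grid, i)]
    return sum(vals) / len(vals)

grid = make_grid()
before = mean_similarity(grid)
for _ in range(30):
    step(grid)
after = mean_similarity(grid)
print(before, after)  # local similarity typically rises: emergent clustering
```

Note how easy the model is to describe in words (each agent moves if too few neighbors are like it) compared with a closed-form mathematical treatment of the same dynamics.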
Given the complexity of even the smallest of social systems, [the analysis of policy outcomes] is not trivial. Social systems are comprised of autonomous people who do not behave in perfectly rational ways, and they have different explanatory mental models for how society works. Social systems do not behave in deterministic ways that lend themselves to a simple spreadsheet analysis or a closed form mathematical formulation at the causal level. The behavior of social systems cannot be neatly constructed, as a watchmaker would build a watch to keep time. (p. 73)
The relationship between a cause and its effect can be understood through models. At its most basic form, a model can simply be a mental concept, a description of a belief for how a system will respond to change. (p. 73)
...one might be tempted to argue that "big data" are the solution. One could simply analyze enough data from a system to understand all of its potential dynamics to include outliers. However, from a policy perspective this analytic approach is of limited utility. What a big data analysis can provide is the correlative structures present within a dataset. This is quite different than the causal structure. Moreover, policy analysis is typically undertaken to inform a desired change to the system. This being the case, the potential new system would be "out of sample" from the big data analysis and how the old and new systems relate may not be clear.
ABMs, on the other hand, allow one to investigate potential generating mechanisms and experiment with causal structures. As Epstein has termed it: "If we did not generate
$x$, we did not explain $x$" (Epstein, 2006). As pointed out by Axtell, growing a particular outcome only demonstrates sufficiency (Axtell, 2000). One can demonstrate what will cause an outcome but, likely, will not be able to prove that is the actual mechanism being used by the system under study. (pp. 76-77)
With ABMs, the model must be run many times to explore the mapping between inputs and outputs. Outputs can be broadly categorized as follows:
Axtell's Levels of Empirical Relevance (2005) give a rough categorization of ABMs' specificity.
Levels 0 and 1 are more appropriate for "thought experiments and initial investigations"; for more serious applications, levels 2 and 3 should be used.
Axtell also describes three levels of correspondence between the ABM and the referent system:
The process of developing such simulations typically involves working with domain experts, implementing a model, verifying the model's implementation (e.g. via unit testing), and then validating the model's outputs against external data sources (if available). It is extremely important to carefully document the modeling process and its implementation so that it may be reproduced. This is especially true because these models can get very complicated very quickly.
Multi-agent systems (formerly called "distributed artificial intelligence", or DAI) form the basis of the multi-agent-based simulation (MABS) technique.
Another property of complex systems is that they are open - new individuals or entities may come and go from the system. There is also the property of second-order emergence, in which the global patterns that emerge from individual behavior may persist after their originating individuals have left the system, and remaining individuals perceive and respond to those effects (perhaps in different ways), generating further emergent effects.
Multi-agent systems are essentially societies of autonomous artificial agents. These agents may have varying complexity. The simplest are reactive agents, and more complex ones are deliberative agents.
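The reactive/deliberative distinction can be sketched in code. The class names and the toy percepts below are invented for illustration: a reactive agent maps the current percept directly to an action, while a deliberative agent maintains internal state (here, a crude belief set) and acts on it.

```python
# Illustrative sketch of reactive vs. deliberative agents (names and the toy
# percept/action vocabulary are invented, not from the text).

class ReactiveAgent:
    """Stimulus-response: maps the current percept directly to an action."""
    def act(self, percept):
        # No memory, no planning - just a fixed rule on the current input.
        return "approach" if percept == "food" else "wander"

class DeliberativeAgent:
    """Keeps an internal world model and decides based on accumulated beliefs."""
    def __init__(self):
        self.beliefs = set()

    def act(self, percept):
        self.beliefs.add(percept)   # update the internal model first
        if "food" in self.beliefs:  # act on remembered information,
            return "approach"       # even when food is not currently perceived
        return "explore"

reactive, deliberative = ReactiveAgent(), DeliberativeAgent()
print(reactive.act("nothing"))      # wander
print(deliberative.act("food"))     # approach
print(deliberative.act("nothing"))  # approach - it remembers the food
```

The contrast: the reactive agent's behavior is entirely determined by the present percept, while the deliberative agent's past observations shape its future choices.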
These systems can be quite difficult to develop - more of an art than a science. The typical process involves collecting real-world data; developing an initial simulation model and its parameters/exogenous factors (ideally based on real values, though such values may not be available); verifying (e.g. via unit testing) that the implementation works as expected; running the simulation; and validating the simulation's results against other collected data (various statistical methods, such as R² and mean absolute error, can be used for validation). Determining what data is relevant to the agents' internal decision-making process, and how that process works, is tricky and nuanced. Validation may also be difficult because of path dependence, stochastic elements of the simulation, and so on.
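The two validation statistics mentioned above are simple to compute. A minimal sketch, with made-up observed and simulated series:

```python
# Validation metrics for comparing simulated outputs to observed data.
# The two series below are invented example values.

def mean_absolute_error(observed, simulated):
    """Average absolute deviation between observed and simulated values."""
    return sum(abs(o - s) for o, s in zip(observed, simulated)) / len(observed)

def r_squared(observed, simulated):
    """Coefficient of determination: 1 - (residual SS / total SS)."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

observed = [1.0, 2.0, 3.0, 4.0, 5.0]
simulated = [1.1, 1.9, 3.2, 3.8, 5.1]

print(mean_absolute_error(observed, simulated))  # about 0.14
print(r_squared(observed, simulated))            # about 0.989
```

An R² near 1 and a small MAE (relative to the scale of the data) indicate that the simulation tracks the observed series well, though neither statistic guards against the path-dependence and stochasticity issues noted above.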
After validation, sensitivity analysis is necessary to determine how sensitive the simulation is to the initial assumptions that were made. This process involves slightly changing the initial conditions and parameters and rerunning the simulation. The easiest way to do this is to vary these parameters randomly so that a distribution over outcomes is generated.
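The random-variation approach to sensitivity analysis can be sketched as a Monte Carlo sweep. The "model" below is a trivial stand-in (exponential growth), and the baseline parameters and ±5% jitter are arbitrary choices for illustration.

```python
import random
import statistics

# Monte Carlo sensitivity analysis: randomly perturb the baseline parameters,
# rerun the model, and inspect the resulting distribution of outcomes.
# The model and all parameter values here are invented stand-ins.
random.seed(1)

def model(growth_rate, initial_pop, steps=20):
    """Toy stand-in for a simulation run: simple exponential growth."""
    pop = initial_pop
    for _ in range(steps):
        pop *= growth_rate
    return pop

baseline = {"growth_rate": 1.05, "initial_pop": 100.0}

outcomes = []
for _ in range(500):
    # Jitter each parameter by up to +/-5% around its baseline value.
    params = {k: v * random.uniform(0.95, 1.05) for k, v in baseline.items()}
    outcomes.append(model(**params))

# A wide spread relative to the baseline outcome signals that the simulation
# is highly sensitive to those initial assumptions.
print(statistics.mean(outcomes), statistics.stdev(outcomes))
```

In a real study each parameter's plausible range would come from the data collection step, and the outcome distribution would be examined per parameter (not just in aggregate) to see which assumptions the results hinge on.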
A model is an abstract, and to some extent idealised, description of reality that still captures a specific phenomenon. It is therefore limited by construction. This is true in particular for the complex systems approach to social systems. Models in this realm are not intended to reproduce society as a whole, but to shed light on mechanisms behind social phenomena. (pp. 141-142)
Social systems in particular are characterized by heterogeneity, unlike other systems such as molecular or biological ones. Complex behavior arises out of the interactions between these heterogeneous individuals (as part of networks), as well as from interactions with signals that are typically exogenous and environmental (e.g. advertisements, media), though these signals may be influenced by agent behavior as well.
Social systems are finite, a result of which is demographic noise - an "intrinsic randomness" (p. 143).