
Concordia

A library for generative social simulation


Concordia Tech Report | Concordia Design Pattern | Code Cheat Sheet

About

Concordia is a library for constructing and running generative agent-based models that simulate interactions among entities in grounded physical, social, or digital environments. It uses an interaction pattern inspired by tabletop role-playing games: a special entity called the Game Master (GM) simulates the environment in which player entities interact. Entities describe their intended actions in natural language, and the GM translates these into appropriate outcomes, e.g. by checking physical plausibility in simulated worlds.

Concordia supports a broad range of applications, including social science research, AI safety and ethics, cognitive neuroscience, economics, synthetic data generation for personalization, and performance evaluation of real services through simulated usage.

Concordia requires access to a standard LLM API and may optionally integrate with external applications and services.

How it Works

Concordia operates as a game engine for generative agents, built around three core concepts:

  • Entities: The actors in the simulation—either player characters (Agents) or system controllers (Game Masters).
  • Components: Modular building blocks of an Entity. Entity behaviors (e.g. logic, chains of thought, memory operations) are all implemented within components. Concordia ships with a core library of components, and user-created components are also included in the main library under the contrib directory. It's easy to create your own components and add them to the library.
  • Engine: The simulation loop. It solicits actions from entities and delegates resolution to the Game Master.

This modular architecture enables complex behaviors to be assembled from simple, reusable parts.
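The three concepts above can be sketched in plain Python. Everything here (the class names, the `context` method, the string-concatenation "resolution") is a hypothetical illustration of the pattern, not Concordia's actual API:

```python
# Hypothetical sketch of the entity/component/engine pattern.
# None of these names come from Concordia itself.

class Component:
    """A modular building block that contributes context to an entity's action."""
    def context(self) -> str:
        raise NotImplementedError

class Memory(Component):
    """A toy memory component: remembers observed events."""
    def __init__(self):
        self.events: list[str] = []
    def observe(self, event: str) -> None:
        self.events.append(event)
    def context(self) -> str:
        return "Memories: " + "; ".join(self.events[-3:])

class Entity:
    """An actor (player agent or game master) assembled from components."""
    def __init__(self, name: str, components: list[Component]):
        self.name = name
        self.components = components
    def act(self) -> str:
        # In Concordia the combined component context would feed an LLM call;
        # here we just concatenate it into a string.
        ctx = " | ".join(c.context() for c in self.components)
        return f"{self.name} acts given [{ctx}]"

class Engine:
    """The simulation loop: solicit actions, delegate resolution to the GM."""
    def __init__(self, game_master: Entity, players: list[Entity]):
        self.game_master = game_master
        self.players = players
    def step(self) -> list[str]:
        outcomes = []
        for player in self.players:
            intent = player.act()
            # The game master translates the intended action into an outcome.
            outcomes.append(f"GM resolves: {intent}")
        return outcomes

memory = Memory()
memory.observe("arrived at the pub")
alice = Entity("Alice", [memory])
engine = Engine(game_master=Entity("GM", []), players=[alice])
print(engine.step()[0])
```

Because behavior lives in components rather than in the entity itself, swapping a component changes what an agent does without touching the engine or the other entities.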


Tip

The best way to learn is to watch the Concordia: Building Generative Agent-Based Models tutorial on YouTube, run examples/tutorial.ipynb, and then try modifying the Prefabs to see how agent behavior changes.

Installation

Concordia is available on PyPI and can be installed using:

pip install gdm-concordia

You can then import concordia in your own code.
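A quick way to confirm the install succeeded without running a simulation is to check that the package is importable (this uses only the standard library, so it works whether or not the install went through):

```python
import importlib.util

# find_spec returns None if the package is not importable.
spec = importlib.util.find_spec("concordia")
if spec is None:
    print("Concordia is not installed; run: pip install gdm-concordia")
else:
    print("Concordia found at", spec.origin)
```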

Development

Codespace

The easiest way to work on the Concordia source code is to use our pre-configured development environment via a GitHub Codespace.

This provides a tested, reproducible development workflow that minimizes dependency management. We strongly recommend preparing all pull requests for Concordia via this workflow.

Manual setup

If you want to work on the Concordia source code within your own development environment, you will need to handle installation and dependency management yourself.

For example, you can perform an editable installation as follows:

  1. Clone Concordia:

    git clone -b main https://github.com/google-deepmind/concordia
    cd concordia
  2. Create and activate a virtual environment:

    python -m venv venv
    source venv/bin/activate
  3. Install Concordia:

    pip install --editable .[dev]
  4. Test the installation:

    pytest --pyargs concordia
  5. Install any additional language model dependencies you will need, e.g.:

    pip install .[google]
    pip install --requirement=examples/requirements.in

    Note that some underlying dependencies may not support your development environment, in which case you will need to resolve those dependency conflicts yourself.

Bring your own LLM

Concordia requires access to an LLM API. Any LLM API that supports sampling text should work, though result quality depends on the capabilities of the chosen model. You must also provide a text embedder for associative memory. Any fixed-dimensional embedding works for this, ideally one suited to sentence similarity or semantic search.
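The two pieces you must supply, a text sampler and a fixed-dimensional embedder, can be sketched as follows. The class and function names here are hypothetical placeholders for whichever LLM API and embedding model you actually wire in (a real embedder tuned for sentence similarity will give far better associative memory than this hash-based stand-in):

```python
import hashlib
import math

class EchoLanguageModel:
    """Placeholder for any LLM API that supports sampling text."""
    def sample_text(self, prompt: str) -> str:
        # A real implementation would call your model provider here.
        return f"[model output for: {prompt[:40]}]"

def embed(text: str, dim: int = 16) -> list[float]:
    """Placeholder embedder: the only hard requirement is that every
    input maps to a vector of the same fixed dimension."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    vec = [b / 255.0 for b in digest[:dim]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]  # unit-normalised, always `dim` long

model = EchoLanguageModel()
print(model.sample_text("What happens next in the pub?"))
print(len(embed("a crashed car")), len(embed("a longer sentence about the snowstorm")))
```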

Example usage

Below is an illustrative social simulation in which four friends are snowed in at a pub. Two of them have a dispute over a crashed car.

The agents are built using a simple reasoning pattern inspired by March and Olsen (2011), who posit that humans generally act as though they choose their actions by answering three key questions:

  1. What kind of situation is this?
  2. What kind of person am I?
  3. What does a person such as I do in a situation such as this?

The agents used in the following example implement exactly these questions:

Open In Colab
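The three questions above can be chained into a simple reasoning scaffold: each answer is appended to the context for the next question, and the answer to the final question becomes the agent's action. The stub `ask` function below is a hypothetical stand-in for a real LLM call:

```python
QUESTIONS = [
    "What kind of situation is this?",
    "What kind of person am I?",
    "What does a person such as I do in a situation such as this?",
]

def ask(context: str, question: str) -> str:
    # Stand-in for an LLM call; a real agent would sample each answer
    # from the model given the accumulated context.
    return f"(answer to: {question})"

def choose_action(observation: str) -> str:
    context = observation
    answer = ""
    for question in QUESTIONS:
        answer = ask(context, question)
        # Each question/answer pair extends the context for the next question.
        context += f"\n{question}\n{answer}"
    # The answer to the final question is taken as the agent's action.
    return answer

print(choose_action("You are snowed in at a pub with three friends."))
```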

Citing Concordia

If you use Concordia in your work, please cite the accompanying article:

@article{vezhnevets2023generative,
  title={Generative agent-based modeling with actions grounded in physical,
  social, or digital space using Concordia},
  author={Vezhnevets, Alexander Sasha and Agapiou, John P and Aharon, Avia and
  Ziv, Ron and Matyas, Jayd and Du{\'e}{\~n}ez-Guzm{\'a}n, Edgar A and
  Cunningham, William A and Osindero, Simon and Karmon, Danny and
  Leibo, Joel Z},
  journal={arXiv preprint arXiv:2312.03664},
  year={2023}
}

Disclaimer

This is not an officially supported Google product.