
Data Model

The top-level object in Amplitude is your organization. Within an organization, Amplitude Experiment follows the project structure defined by Amplitude Analytics. In short, all Experiment data must be associated with an Amplitude Analytics project.

Flags, experiments, and deployments all live within an Amplitude project.

Data model diagram


Projects

Experiment uses the same projects that Amplitude Analytics requires. Generally speaking, you want to create one project per product and per environment. Because flags, experiments, and deployments exist only within a single project, you need to duplicate these objects across projects for the same product.

Copying a flag to another project

When developing a new feature with an experiment, you can create the experiment in the dev environment project to develop and test that the implementation is correct, then copy the experiment into the prod project to run the experiment in prod.


Deployments

In Amplitude Experiment, a deployment serves a group of flags or experiments for use in an application. Each deployment has an associated, randomly generated deployment key (also called the API key), which uniquely identifies the deployment and authorizes requests to Amplitude Experiment's evaluation servers.

Client vs Server Deployments

Deployments are either client or server deployments. Only server deployments can access flag configurations for local evaluation, so server deployment keys should not be shared or made public in any way.

Deployments live within Amplitude Analytics projects; a project may have multiple deployments. Deployments are added to Flags and Experiments which exist within the same project. When a request to fetch variants for a user is received by Experiment's evaluation servers, the deployment key is used to look up all associated flags and experiments for evaluation.
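To make this concrete, the sketch below builds a remote-evaluation request that is authorized by a deployment key. The endpoint URL and `Api-Key` header format shown here are illustrative assumptions based on this pattern, not confirmed API details; consult the Evaluation REST API reference for the exact request shape.

```python
# Sketch: building a remote-evaluation request authorized by a deployment key.
# The endpoint and header format are illustrative assumptions, not confirmed
# Amplitude API details.
from urllib.parse import urlencode

def build_fetch_request(deployment_key: str, user: dict) -> tuple[str, dict]:
    """Return a hypothetical (url, headers) pair for a variant-fetch call."""
    base_url = "https://api.lab.amplitude.com/v1/vardata"  # assumed endpoint
    query = urlencode({k: v for k, v in user.items() if v is not None})
    # The deployment key identifies the deployment and authorizes the request.
    headers = {"Authorization": f"Api-Key {deployment_key}"}
    return f"{base_url}?{query}", headers

url, headers = build_fetch_request("client-abc123", {"user_id": "user@example.com"})
```

On the server side, the deployment key in the `Authorization` header is what lets the evaluation service look up the flags and experiments associated with that deployment.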

Flags and experiments

Feature flags and experiments are used to serve a variable experience to a user. They're identified by a flag key, are associated with 0-n deployments, and contain 1-k variants. Additionally, the evaluation mode (local or remote) determines whether the flag or experiment can be locally evaluated, and may limit its targeting capabilities: local evaluation mode flags can't use advanced targeting features like behavioral cohorts.


Feature flags and experiments share the same underlying data model, and can be migrated from one to the other retroactively. The most visible difference comes in the product interface: experiments guide you through an experiment lifecycle and give you the ability to define success metrics and perform analysis; whereas flags are more bare-bones, and don't include special planning and analysis sections.


Flags

Flags are used for standard feature flagging without user analysis. When created, a flag comes with a single default variant, on.

Flag Use Cases

  • Rolling out a feature to a subset of users (e.g. beta customers).
  • Different experience for a behavioral cohort (e.g. power users).


Experiments

Experiments are used for feature experimentation on users. When created, an experiment comes with two default variants, control and treatment.

Experiment Use Cases

  • Run an A/B test for a new feature in your application.
  • Experiment on multiple recommendation algorithms on your server.


Variants

A variant exists within a flag or an experiment, and represents a variable experience for a user.

| Field | Requirement | Description |
|---|---|---|
| Value | Required | A string that identifies the variant in your instrumentation. The value string is checked for equality when a variant is accessed from the SDK or the Evaluation REST API. Formatting is limited to all-lowercase kebab-case or snake_case. |
| Payload | Optional | A dynamic JSON payload for sending arbitrary data along with the variant. For example, you could send a hex code to change the color of a component in your application. |
| Name | Optional | An additional name on top of the Value, without formatting limitations. Also useful for renaming the variant without breaking the instrumentation in your code base. |
| Description | Optional | A more detailed description of the variant. Can be used to describe in more detail what the user experiences when served this variant. |

SDK Usage

Only the Value and Payload are available when accessing a variant from an SDK or the Evaluation REST API.
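For instance, client code typically compares the variant's value string for equality and then reads the payload. A minimal sketch, assuming a plain dictionary shape for the evaluated variant (the exact SDK type differs per platform):

```python
# Sketch: acting on a variant's value and payload.
# The dict shape below is an illustrative assumption, not the exact SDK type.
variant = {"value": "treatment", "payload": {"button_color": "#ff0000"}}

if variant["value"] == "treatment":  # the value string is checked for equality
    # The payload carries arbitrary JSON sent along with the variant,
    # e.g. a hex code used to restyle a component.
    color = variant["payload"]["button_color"]
else:
    color = "#000000"  # fallback experience
```

Because only Value and Payload are exposed at evaluation time, renaming a variant's Name or Description never breaks comparisons like the one above.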


Users

Experiment users map neatly to a user within Amplitude Analytics. Alongside flag configurations, users are used as input to evaluation. The properties on the user can be used in flag and experiment targeting rules.

Within Amplitude Experiment, users are passed to evaluation via fetch requests in remote evaluation, or directly to the evaluate function for local evaluation.


Either a user ID or a device ID must be included in the user object for evaluation to succeed. For example, remote evaluation returns a 400 error if both the user ID and the device ID are null, empty, or missing.

| Property | Type | Description |
|---|---|---|
| user_id | string | The primary identifier for the user, generally set when the user is logged into an account within your system. The User ID is used when resolving the Amplitude ID during enrichment prior to remote evaluation, where the Amplitude ID is used as the default bucketing key. |
| device_id | string | The secondary identifier for the user, generally randomly generated by an analytics SDK on the client side, or generated on the server side and set in a cookie. The Device ID is also used when resolving the Amplitude ID during enrichment prior to remote evaluation, where the Amplitude ID is used as the default bucketing key. |
| user_properties | object | An optional object of additional custom properties taken into consideration when evaluating the user during local or remote evaluation. |
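The identifier requirement above can be expressed as a simple pre-flight check. The `validate_user` helper below is hypothetical, shown only to make the rule concrete; it is not part of any Amplitude SDK.

```python
# Sketch: enforcing the user/device ID requirement before evaluation.
# validate_user is a hypothetical helper, not part of any Amplitude SDK.
def validate_user(user: dict) -> bool:
    """Evaluation requires a non-empty user_id or device_id."""
    return bool(user.get("user_id")) or bool(user.get("device_id"))

validate_user({"device_id": "a1b2c3"})             # passes: device ID present
validate_user({"user_id": "", "device_id": None})  # fails: both empty/null,
                                                   # remote evaluation would return 400
```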
Full User Definition

```
{
    "user_id": string,
    "device_id": string,
    "country": string,
    "region": string,
    "city": string,
    "dma": string,
    "language": string,
    "platform": string,
    "version": string,
    "os": string,
    "device_manufacturer": string,
    "device_brand": string,
    "device_model": string,
    "carrier": string,
    "library": string,
    "user_properties": object
}
```

Last update: 2022-04-26