Local Evaluation
Server-side local evaluation runs evaluation logic on your server, eliminating the overhead of a network request per user evaluation. Sub-millisecond evaluation makes it ideal for latency-sensitive systems that need to stay performant at scale.
Exposure Tracking
Local evaluation doesn't automatically set experiment user properties. If you use local evaluation and want to run experiments whose success metrics are analyzed, you need to implement exposure tracking (generally done client-side).
To simplify client-side exposure tracking, bootstrap the client-side SDK with the variants evaluated on the server and rely on automatic exposure tracking through one of the analytics SDK integrations, as sketched below.
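For example, here's a minimal sketch of bootstrapping the browser SDK, assuming the JS client's `initializeWithAmplitudeAnalytics` entry point and its `initialVariants`/`source` configuration options; the deployment key and the `__VARIANTS__` transport are placeholders:

```ts
import { Experiment, Source } from '@amplitude/experiment-js-client';

// Variants evaluated server-side with the local evaluation SDK and
// delivered to the client, e.g. rendered into the page by your server.
// `__VARIANTS__` is a hypothetical transport, not part of the SDK.
const serverSideVariants = (window as any).__VARIANTS__;

// Initialize the client SDK with the Amplitude Analytics integration so
// that accessing a variant automatically tracks an exposure event.
const experiment = Experiment.initializeWithAmplitudeAnalytics(
  'client-deployment-key', // placeholder deployment key
  {
    initialVariants: serverSideVariants,
    source: Source.InitialVariants, // prefer the bootstrapped variants
  },
);

// Accessing the variant tracks the exposure for analysis.
const variant = experiment.variant('my-flag-key');
```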
Targeting Capabilities
Because local evaluation happens outside of Amplitude, the advanced targeting and identity resolution powered by Amplitude Analytics aren't supported. That said, local evaluation still performs consistent bucketing with target segments, which is often sufficient.
| Feature | Remote Evaluation | Local Evaluation |
| --- | --- | --- |
| Consistent bucketing | ✅ | ✅ |
| Individual inclusions | ✅ | ✅ |
| Targeting segments | ✅ | ✅ |
| Amplitude ID resolution | ✅ | ❌ |
| User enrichment | ✅ | ❌ |
| Sticky bucketing | ✅ | ❌ |
Implementation
Local evaluation is just evaluation: a function that takes a user and a flag as input and outputs a variant. The only remote part of local evaluation is fetching flag configurations from Amplitude Experiment. This fetch can happen on an interval, and the flags can be cached in memory on the server for zero-latency access, as in the sketch below.
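For example, a minimal sketch with the Node.js SDK, assuming the `initializeLocal`/`start`/`evaluate` API and the `flagConfigPollingIntervalMillis` config option; the deployment key and flag key are placeholders:

```ts
import { Experiment } from '@amplitude/experiment-node-server';

async function main() {
  // Initialize a local evaluation client. Flag configurations are
  // fetched once on start(), cached in memory, and refreshed on the
  // configured polling interval.
  const experiment = Experiment.initializeLocal('server-deployment-key', {
    flagConfigPollingIntervalMillis: 30000,
  });

  // Fetch and cache the initial flag configurations.
  await experiment.start();

  // Evaluation itself runs entirely in-process: user in, variants out.
  const variants = await experiment.evaluate({
    user_id: 'user@company.com',
    device_id: 'abcdefg',
  });

  const variant = variants['my-flag-key'];
  console.log(variant?.value);
}

main();
```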
Edge Evaluation
The local evaluation Node.js SDK can run in edge workers and functions that support JavaScript and a distributed store. Contact your representative or email experiment@amplitude.com to learn more.
SDKs
Local evaluation is only available in server-side SDKs that implement it. The table below shows current support.
| SDK | Remote Evaluation | Local Evaluation |
| --- | --- | --- |
| Node.js | ✅ | ✅ |
| Ruby | ✅ | ❌ |
| JVM | ✅ | ✅ |
| Go | ✅ | ✅ |
| Python | ✅ | ❌ |
Performance
The following results are for a single flag evaluation. They were collected over 10 executions of 10,000 evaluation iterations each, with randomized user inputs evaluated against one flag configuration selected at random from 3 possible flag configurations. A rough sketch of a comparable measurement follows the table.
| SDK | Average | Median | Cold Start |
| --- | --- | --- | --- |
| Node.js | 0.025ms | 0.018ms | 3ms |
| Go | 0.098ms | 0.071ms | 0.7ms |
| JVM | 0.007ms | 0.005ms | 6ms |
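For reference, here's a rough sketch of how a comparable measurement might be taken with the Node.js SDK. This isn't the actual benchmark harness used for the numbers above; it assumes the `evaluate` API shown earlier, and randomizes only the user ID:

```ts
import { performance } from 'node:perf_hooks';
import { Experiment } from '@amplitude/experiment-node-server';

async function benchmark() {
  // Placeholder deployment key; flags must already exist in the deployment.
  const experiment = Experiment.initializeLocal('server-deployment-key');
  await experiment.start();

  const timings: number[] = [];
  for (let i = 0; i < 10000; i++) {
    // Randomized user input for each iteration.
    const user = { user_id: `user-${Math.floor(Math.random() * 1e6)}` };
    const start = performance.now();
    await experiment.evaluate(user, ['my-flag-key']);
    timings.push(performance.now() - start);
  }

  // Summarize the per-evaluation latency in milliseconds.
  timings.sort((a, b) => a - b);
  const average = timings.reduce((sum, t) => sum + t, 0) / timings.length;
  const median = timings[Math.floor(timings.length / 2)];
  console.log(`average: ${average.toFixed(3)}ms, median: ${median.toFixed(3)}ms`);
}

benchmark();
```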