
A/B Testing Analytics Pipeline

This system runs controlled experiments to test whether a change — like a different button color — actually improves results. It uses statistics to tell you whether the difference is real or just random chance.

The Experiment

What did we test?

We tested two versions of a checkout button on an e-commerce site across 10,000 users over 14 days. Half the users saw the original green button (Control A), half saw the new orange button (Variant B). We measured which one got more people to complete their purchase.

Control A (Original): green "Complete Purchase" button, 11.56% conversion rate

Variant B (New Version): orange "Buy Now" button, 14.60% conversion rate
How it works

How do we know it's not just luck?

1. Split users randomly

5,000 users saw the green button. 5,000 saw the orange button. The split was completely random so neither group had an unfair advantage.
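A minimal sketch of how such an assignment might be implemented. Hashing the user ID rather than flipping a coin on each request keeps every user in the same group across visits; the function name and variant labels here are illustrative, not the project's actual code.

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into A or B with a 50/50 split."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("user-12345"))  # same user always lands in the same bucket
```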

2. Run a statistical significance test

We applied a two-proportion z-test, a statistical test that asks: if the two buttons truly converted at the same rate, how likely is a gap at least this large to appear by chance? If that probability (the p-value) is below 5%, the result is considered statistically significant.
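For reference, a two-proportion z-test is small enough to compute with the standard library alone. The sketch below uses the conversion counts implied by the reported rates (11.56% and 14.60% of 5,000 users each) and reproduces the z ≈ 4.508 and p ≈ 0.000007 shown in the results.

```python
from math import sqrt
from statistics import NormalDist

# Conversion counts implied by the reported rates (11.56% and 14.60% of 5,000).
n_a, conv_a = 5000, 578   # Control A
n_b, conv_b = 5000, 730   # Variant B

p_a, p_b = conv_a / n_a, conv_b / n_b
pooled = (conv_a + conv_b) / (n_a + n_b)                # rate under "no difference"
se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error under H0

z = (p_b - p_a) / se                                    # ≈ 4.508
p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided, ≈ 0.000007

print(f"z = {z:.3f}, p = {p_value:.6f}, significant = {p_value < 0.05}")
```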

3. Make a data-driven decision

The p-value came back at 0.000007, meaning that if the two buttons actually performed the same, a gap this large would show up by chance only about 0.0007% of the time. The orange button genuinely converts better. Recommendation: launch it.
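Once the test statistics exist, the decision itself reduces to a simple rule. A sketch using the same numbers and the 5% threshold from step 2; the variable names are illustrative.

```python
ALPHA = 0.05  # significance threshold from step 2

p_a, p_b, p_value = 0.1156, 0.1460, 0.000007

relative_lift = (p_b - p_a) / p_a  # ≈ 0.263, the reported ~26% conversion lift

if p_value < ALPHA and relative_lift > 0:
    print(f"Launch Variant B: +{relative_lift:.0%} lift, p = {p_value:.6f}")
else:
    print("Keep Control A: no significant improvement detected")
```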

Results

What the API returns

GET /significance — Is the result real?

is_significant: true
p_value: 0.000007
z_statistic: 4.508
recommendation: "Launch Variant B"
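Here is one way the endpoint might be consumed from Python. The base URL is a placeholder for wherever the API is deployed; the field names match the response shown above.

```python
import requests

BASE_URL = "http://localhost:8000"  # placeholder; use the deployed API's host

resp = requests.get(f"{BASE_URL}/significance", timeout=10)
resp.raise_for_status()
result = resp.json()

if result["is_significant"]:
    print(f"{result['recommendation']} (p = {result['p_value']})")
else:
    print("No significant difference detected yet")
```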
Conversion Lift: 26%
Users Tested: 10,000
Revenue Lift: $8,528
Days Running: 14