petithacks - small hacks, faster growth

We optimize online businesses for more conversions

Good design and data-driven testing is how we do it

Little details such as the ones we highlight in petithacks matter. The smaller the change, the easier it is to pinpoint cause and effect and learn something. By running quick A/B tests each month, we can increase the conversion rate of your online business.

Here is our full track record of past results

We've worked on projects with an average relative increase of 12.43%
Jan 2017 Increased sales by 7% for a jewellery e-commerce store
Dec 2016 Increased signups by 4.3% for an online-marketing tool
Nov 2016 Increased sales by 21% for an online webinar add-on
Nov 2016 Increased signups by 4% for an invoicing tool
Oct 2016 Increased signups by 0% for a self-hosted helpdesk solution
Oct 2016 Increased sales by 17.8% for an eBay-like marketplace
Oct 2016 Increased signups by 9% for a local SEO reporting service
Sept 2016 Increased signups by 36.4% for an info-product landing page
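For reference, the 12.43% figure quoted above is the arithmetic mean of the listed increases (a quick sketch in Python; the page truncates 12.4375 to 12.43):

```python
# Relative conversion-rate increases from the track record above (percent)
increases = [7, 4.3, 21, 4, 0, 17.8, 9, 36.4]

average = sum(increases) / len(increases)
print(f"{average:.4f}")  # 12.4375, quoted on the page as 12.43%
```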

What can you expect from us each month?

Continuous prioritization of what needs to be tested

We will work with you to determine what can be tested fastest, with the least effort, and with a noticeable effect on your final conversion rate.

Focus on one test per month

At least one A/B test running per month, with as little downtime (time without tests running) as possible.

Test concepting

We will agree on the primary goal of your test and provide at least two variations to run each month.

Test maintenance and reporting

We will set up the test using Optimizely or any other tool you prefer, and we will provide a bi-weekly report.

Let's lift up your conversions

We measure performance against a primary goal we establish together. The estimator should give you a quick idea of how we price a project of average complexity. Let's talk and come up with a plan that works for you.
  • $160 upfront + $400 for each 1% relative increase
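As a worked example of the formula above (assuming fractional percentage points are billed pro rata, which the page does not state):

```python
def project_fee(relative_increase_pct):
    """$160 upfront plus $400 for each 1% of relative increase.
    Pro-rata billing of fractional points is an assumption."""
    return 160 + 400 * relative_increase_pct

print(project_fee(10))  # a 10% relative lift costs 160 + 4000 = 4160
```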

You might still have some questions

Do you implement a winning test?

No. After a successful test completes, implementing the winner is your team's responsibility. We can share the tested code to make your life easier, but it is better that your developers implement the winning variation properly.

How long does a test usually run for?

For sites with lots of traffic, a minimum of one week can be enough; however, we always suggest keeping a test running for at least one month. To reach greater statistical validity, we calculate the required duration based on visits, the number of variations involved and the existing conversion rate.
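As an illustration of that kind of duration estimate, here is a minimal sketch using the standard two-proportion sample-size formula at a 5% significance level and 80% power; the traffic figures are made up and this is a generic textbook calculation, not petithacks' exact method:

```python
import math

def required_days(daily_visitors, variations, baseline_cr, min_relative_lift,
                  alpha_z=1.96, power_z=0.84):
    """Rough A/B-test duration estimate from the standard two-proportion
    sample-size formula (alpha = 0.05 two-sided, power = 0.80).
    A generic sketch under assumed inputs, not a definitive method."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + min_relative_lift)  # the lift is relative
    p_bar = (p1 + p2) / 2
    numerator = (alpha_z * math.sqrt(2 * p_bar * (1 - p_bar))
                 + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    n_per_variation = numerator / (p2 - p1) ** 2
    total_visitors = n_per_variation * variations
    return math.ceil(total_visitors / daily_visitors)

# Hypothetical site: 2,000 visitors/day, 2 variations,
# 3% baseline conversion rate, hoping to detect a 10% relative lift
print(required_days(2000, 2, 0.03, 0.10))
```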

Will the results of the test be reliable?

Yes. We base all of our design decisions on explicit hypotheses and justify them. We ensure that all tests run for at least a week or two. Afterwards, we don't just tell you that you have a winner: we first look at statistical significance, margin of error, week-to-week consistency and secondary metrics to arrive at an overall assessment.
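The significance check mentioned above can be illustrated with a standard two-proportion z-test (a generic sketch, not the exact tooling; the visitor and conversion counts below are made up):

```python
import math

def z_score(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test with a pooled standard error.
    |z| > 1.96 corresponds to significance at the 95% level."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Hypothetical result: 300/10,000 conversions on A vs 360/10,000 on B
z = z_score(300, 10_000, 360, 10_000)
print(z > 1.96)  # True: B's lift clears the 95% significance bar
```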

What results can you expect?

We are confident we can deliver positive results for you. Projects and tests differ, but across previous projects we have achieved an average relative improvement of roughly 10%. That's why we can offer performance-based agreements and why the risk is shared. When a test fails, we try to retest at least once.

Still have questions?