It's hard to know whether you're focusing on the right things when developing a product or service - here we share a process that makes it easier for us.

Suraj Vadgama, Creator of www.beehivegiving.org & Associate @ CAST

I spend most of my time working on Beehive, and as part of a small team working on a large problem, it can be easy to get caught up in day-to-day delivery. Add to this the complexities and often unconventional market dynamics of working on a tech for good project, and it becomes easy to neglect some of the bigger questions we're trying to answer.

That's very much the situation we found ourselves in this summer, when we were heads down adding premium features to Beehive. We weren't seeing the results we wanted, and our process for wrangling our assumptions into testable hypotheses had broken down.

It's an easy trap to fall into and can be very costly if not addressed quickly. This write-up shares three key parts of our revamped process and what we learnt from implementing it.

1. Assumptions

This is the most important part. Every project should have a master list of clearly articulated key Assumptions that relate to the core success criteria for the project. This list should be updated regularly, and ideally prioritised based on factors such as how critical the Assumption is to success.

It can be easy to over-theorise this step, but a small list of mission-critical questions and assumptions is far better than a long list that isn't related to the core problem or opportunity.

An example from Beehive

Micro/small charities have the capacity to pay for fundraising tools/resources.

If it makes sense, you can group or link Assumptions to gain clarity about your domain - but remember these relationships are often assumptions too.

We like to use a simple classification of "What we know", "What we think we know", and "What we don't know" for each Assumption.
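
To make the classification concrete, here's a minimal sketch of a master list in Python - assuming you wanted to track it in code rather than a document. All of the names here are hypothetical, invented for this example; only the classification labels and the sample Assumption come from the text above.

```python
from enum import Enum


class Knowledge(Enum):
    """The simple classification applied to each Assumption."""
    KNOW = "What we know"
    THINK_WE_KNOW = "What we think we know"
    DONT_KNOW = "What we don't know"


# The master list of key Assumptions, prioritised with the ones
# most critical to the project's success first.
assumptions = [
    ("Micro/small charities have the capacity to pay for "
     "fundraising tools/resources.", Knowledge.DONT_KNOW),
]
```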

2. Experiments

With the key Assumptions you want to test in hand, you can start to develop testable hypotheses that relate to them.

A simple spreadsheet is a good place to start, and will give you a high-level view to schedule work from.

Each row represents a Hypothesis (an objective statement about an Assumption which can be either true or false) and has the following fields; a code sketch pulling them together follows the field descriptions:

Status

This field helps with prioritisation and has the following values: Testing, Fail, or Pass. A blank cell indicates that testing is yet to start.

Goal

The Assumption (from above), question, and/or business objective the Hypothesis relates to.

Micro/small charities have the capacity to pay for fundraising tools/resources.

Hypothesis

An objective statement about an Assumption which can be either true or false.

Reducing the price of Beehive Pro - Micro to £X will make more Micro charities pay.

Metric

The measurements expected to change during the testing of the hypothesis.

Number of paying Micro charities.

Experiment

The plan for testing the hypothesis.

Reduce Beehive Pro Micro to £X.

It's important to note that an experiment doesn't need to be a big thing; in the beginning it might be as simple as speaking to people. And even later on in a project, you still want to do the least work possible to meaningfully run an isolated test.

Fail condition

The measurement that would convince us beyond reasonable doubt that the hypothesis is invalid.

Less than X payments from Micro charities over one month.

Results

The results of testing the hypothesis.

X Micro charities paying
12.09.17 - start
...details...
11.10.17 - end

Next steps

The next step based on the result of testing.

See Hypothesis #231
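
Pulling the fields together, one row of the spreadsheet could be modelled as below. This is a sketch only: the fields, status values, and example content mirror the descriptions above, but the class and attribute names are hypothetical, chosen purely for illustration.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Status(Enum):
    """Testing status of a Hypothesis."""
    TESTING = "Testing"
    FAIL = "Fail"
    PASS = "Pass"


@dataclass
class HypothesisRow:
    """One row of the experiments spreadsheet, as described above."""
    goal: str                        # the Assumption and/or business objective it relates to
    hypothesis: str                  # an objective statement that can be true or false
    metric: str                      # the measurement expected to change during testing
    experiment: str                  # the plan for testing the hypothesis
    fail_condition: str              # the measurement that would invalidate the hypothesis
    status: Optional[Status] = None  # None, like a blank cell, means testing hasn't started
    results: str = ""                # filled in as the experiment runs
    next_steps: str = ""             # e.g. a pointer to a follow-up Hypothesis


# The Beehive pricing example from above, expressed as a row:
row = HypothesisRow(
    goal="Micro/small charities have the capacity to pay for fundraising tools/resources.",
    hypothesis="Reducing the price of Beehive Pro - Micro to £X will make more Micro charities pay.",
    metric="Number of paying Micro charities.",
    experiment="Reduce Beehive Pro Micro to £X.",
    fail_condition="Less than X payments from Micro charities over one month.",
    status=Status.TESTING,
)
```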

3. Weekly review

We find a weekly cadence of updating Assumptions and Hypotheses based on the results of active experiments to be about right.

Establishing a schedule and a consistent structure can help engage key stakeholders effectively in the review process - often adding valuable external insight that's difficult for the project team to attain alone.

Putting it into practice

The impact of revamping our innovation accounting has been really powerful - we've seen results move in the right direction quicker than ever before. We now have a common language to engage stakeholders in the product development process, and a single source of truth to ensure we're focusing on the work that's most critical to the project's success. No more hunches: each one gets logged as a Hypothesis and reviewed with our team and advisors.

It's not without a cost, however, and it's worth closing with some tips to help embed a test-driven approach.

  • Overcome the initial setup cost - conducting user research to define Assumptions and the related Hypotheses is time-consuming and mentally taxing. Make sure the project's schedule factors this in throughout the product's development, not just at the beginning, and don't be afraid to hold off on delivery to establish a baseline.
  • Make results visible - share the results of your experiments with people outside the project team to spread the innovation burden and ensure unbiased review.
  • Review regularly - at least once a week, and if you're not seeing results, change something.
  • Remember it's iterative - Assumptions and Hypotheses will evolve as your understanding develops and each time your project goes through the build > measure > learn cycle. Don't expect to get it right first time.
  • Avoid safe bets - working in the social sector, it can be easy to become risk-averse. Early on, if proving or disproving a Hypothesis isn't likely to determine the fate of your project, it isn't the most important thing to test.

Want more?

This is an evolving piece of work for us, and we'd love to speak with people facing similar challenges to help shape its next steps. Get in touch at hello@wearecast.org.uk.