Objectively Making Product Decisions

by Joe Stump

Deciding which mix of features to release, and in what order, to drive growth in your product is difficult enough as it is. Figuring out how to make those decisions objectively, and with confidence, can sometimes feel downright impossible.

On November 12th, we released Sprint.ly 1.0 to our customers. It was a fairly massive release with core elements being redesigned, major workflows being updated, and two major new features. The response has been overwhelmingly positive. Here’s an excerpt from an actual customer email:

“Well, I’ve just spent some time with your 1.0 release, and I think it’s wonderful. It’s got a bunch of features I’ve been sorely missing. To wit:

  • Triage view – a Godsend or, no he didn’t?!
  • Single-line item view – where have you been all my life?
  • Convenient item sorting icons – OMG, how did you know?
  • Item sizing, assigning, following icons everywhere – spin us faster, dad!

I’m sure there are a ton more, but these are great improvements.”

Yes, how did we know? I’m going to lay out the methodologies we used at Sprint.ly to craft the perfect 1.0 for our users. It all begins with a lesson in survivorship bias. In short, survivorship bias, as it applies to product development, posits that you’re going to get dramatically different answers to the question “What feature would you like?” depending on whether you ask current customers or former and potential customers.

LESSON 1: OBJECTIVELY EVALUATE YOUR EXIT SURVEYS

You do have an exit survey, yes? If not, stop reading this now, go to Wufoo, and set up a simple form asking customers who cancel their accounts or leave your product for input on why they left. You can take a look at ours for reference.

The problem with exit surveys, and customer feedback in general, is that everyone asks for things in slightly different ways. Customer A says “Android”, Customer B says “iOS”, and Customer C says “responsive design”. What they’re all really saying is “mobile”. Luckily, human brains are pretty good pattern recognition engines.

So here’s what I did:

  1. I created a spreadsheet and put a group along the top for each major theme I noticed in our exit surveys. I only put a theme up top if it was mentioned by more than one customer.
  2. I then went through every single exit survey and put a one (1) underneath each theme that the entry mentioned.
  3. I then calculated the percentage of exit surveys mentioning each theme so that I could rank the themes by how many of our former customers had asked for them to be addressed.
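
A minimal sketch of that kind of tally in Python might look like this, assuming the surveys are already loaded as free-text responses and the themes are simple keyword buckets. The theme names and keywords below are made up for illustration; our real buckets came from reading the surveys by hand.

```python
from collections import Counter

# Hypothetical themes and the keywords that map to them. The real buckets
# came from reading the surveys, not from a fixed keyword list.
THEMES = {
    "mobile": ["android", "ios", "responsive"],
    "pricing": ["price", "expensive", "cost"],
    "data density": ["backlog", "cluttered", "too many items"],
}

def tally_themes(surveys):
    """Count how many exit surveys mention each theme at least once."""
    counts = Counter()
    for text in surveys:
        text = text.lower()
        for theme, keywords in THEMES.items():
            if any(word in text for word in keywords):
                counts[theme] += 1  # one "1" per theme per survey
    return counts

surveys = [
    "Too expensive for our small team.",
    "No Android app, and the backlog view is cluttered.",
]
for theme, count in tally_themes(surveys).most_common():
    print(f"{theme}: {count / len(surveys):.0%} of exit surveys")
```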

Here are the results:

Now, I know you don’t know our product as well as you know your own, so the themes might not make much sense, but allow me to elaborate on the points I found most interesting about this data:

  • Our support queues are filled with people asking for customized workflows, but in reality it doesn’t appear to be a major force driving people away from Sprint.ly.
  • 17% of our customers churn either because we have no estimates or they can’t track sprints. Guess what? Both of those are core existing features in Sprint.ly. Looks like we have an education and on-boarding problem there.
  • The highest non-pricing reason people were leaving was a big bucket that we referred to internally as “data density” issues.

After doing this research I was confident that we should double down on fixing these UI/UX issues. We immediately started working on major updates to a few portions of the site that we believed would largely mitigate our dreaded “data density” issues.

But how could we know these changes would keep the next customer from leaving?

LESSON 2: IDENTIFY WHICH CUSTOMERS WERE LIKELY CHURNING DUE TO “DATA DENSITY” ISSUES

We store a timestamp for when a customer creates their account and a separate one for when they cancel it. This is useful data to have for a number of reasons, but what I found most telling was the following:

  1. Calculate the difference, in whole days, between when each account was created and when it was cancelled.
  2. Group those lifetimes by month and count the cancellations in each bucket, e.g. 100 churned in the first month, 50 in the second, etc.
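
As a rough sketch, assuming the account records have already been pulled out with their creation and cancellation timestamps (the field names here are hypothetical), the bucketing might look something like this:

```python
from collections import Counter
from datetime import datetime

# Hypothetical account records; in practice these come from the billing or
# accounts database. Field names are made up for illustration.
accounts = [
    {"created_at": datetime(2012, 1, 3), "cancelled_at": datetime(2012, 1, 20)},
    {"created_at": datetime(2012, 2, 1), "cancelled_at": datetime(2012, 7, 15)},
]

churn_by_month = Counter()
for account in accounts:
    lifetime_days = (account["cancelled_at"] - account["created_at"]).days
    month = lifetime_days // 30 + 1  # month 1 = the first 30 days, and so on
    churn_by_month[month] += 1

for month in sorted(churn_by_month):
    print(f"month {month}: {churn_by_month[month]} cancellations")
```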

You should end up with a chart that looks something like this:

It shouldn’t be surprising that the vast majority of people churn in the first two months; these are mostly your trial users. Why our first month is so high is another post for another day. What we really want to figure out is why engaged, paying customers leave, so let’s remove trial users and the first month to increase the signal.

We get a very different picture:

In general you want this chart to curve down over time, but you can see Sprint.ly had a few troubling anomalies to deal with. Namely, there are clear bumps in churn numbers for months 5, 7, and 8.

We had a theory for why this was based on the above survey data. A large part of the “data density” issues had to do with a number of problems managing backlogs with a lot of items in them. Was the large amount of churn in months 5-8 due to people hitting the “data density” wall?

LESSON 3: TESTING THE THESIS

So far we’ve objectively identified the top reasons people were leaving Sprint.ly, along with a few anomalies in churn timing that might point to the customers churning for those reasons. Now we needed to verify our thesis and, more importantly, show those customers what we were cooking up to see whether our update would have made them more (or less) likely to stay.

To do that we turned to intercom.io and set up an email to be sent out to customers that fit the following criteria:

  • Had created their account more than 4 months ago.
  • Had not been seen on the site in the last 2 weeks.
  • Was the person who owned the account.


I also sent this email out manually to a number of customers fitting this profile whom I was able to find in our internal database. I got a number of responses and was able to schedule phone calls with a handful of those customers.
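
A rough sketch of pulling a list like that from an internal database might look something like this. The account fields and the cutoff date are hypothetical, and Intercom handled this filtering for us on the automated side:

```python
from datetime import datetime, timedelta

now = datetime(2012, 11, 1)  # illustrative "today"

# Hypothetical account records pulled from an internal database; the field
# names are made up for illustration.
accounts = [
    {"owner_email": "alice@example.com", "is_owner": True,
     "created_at": datetime(2012, 5, 2), "last_seen_at": datetime(2012, 9, 30)},
    {"owner_email": "bob@example.com", "is_owner": True,
     "created_at": datetime(2012, 10, 20), "last_seen_at": datetime(2012, 10, 30)},
]

def should_email(account):
    """Mirror the criteria above: account owner, older account, inactive lately."""
    old_enough = account["created_at"] < now - timedelta(days=4 * 30)
    inactive = account["last_seen_at"] < now - timedelta(weeks=2)
    return account["is_owner"] and old_enough and inactive

targets = [a["owner_email"] for a in accounts if should_email(a)]
print(targets)  # ['alice@example.com']
```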

From there it was a matter of showing our cards. I would hop on Skype, walk through the new design ideas and the problems we were trying to address, and ask whether these features would have kept them from leaving in the first place. Luckily, we had been closely measuring the feedback and were pleased to find that our efforts were not wasted: the changes did indeed address a lot of their issues.

CONCLUSION

Making product decisions based on customer feedback can be difficult. The more you can do to increase signal over noise, gather objective metrics, and distill customer feedback, the better. It’s not always easy, but it’s always worth it.

About The Author, Joe Stump

Joe is a seasoned technical leader and serial entrepreneur who has co-founded three venture-backed startups (SimpleGeo, attachments.me, and Sprint.ly), was Lead Architect of Digg, and has invested in and advised dozens of companies.