Highlights

  • Not too long ago, we dedicated a 6-week cycle to improving Basecamp’s onboarding flows. The aim was to increase conversion from trial to paid by smoothing out the initial experience of getting going, doing a better job of quick-teaching the basics, and making a few things a little bit easier each step of the way.
  • At a high level, these were the projects in that 6-week period:
      • Adding a sample Getting Started project with steps and basic education.
      • Streamlined creating a new project (one step instead of the previous multi-step), plus exposing all of the tools up front. We also dropped the guided-help option we used to have.
      • Revamped and simplified the blank slates introducing tools that haven’t been used yet.
      • Refreshed the sample project for creating a podcast (a great cross-functional example people could relate to).
      • Sped up creating a new account (people used to have to wait a few seconds while the sample projects were generated).
      • Added an email reminder that the trial was ending soon.
      • Dropped the other sample project so we didn’t overwhelm with examples.
      • Rewrote the Hey! menu onboarding messages/tips so they were easier to follow.
    The result? A huge 30% increase in conversion. 30%!
  • As anyone who works in this field knows, conversion improvements (without tricking people) are usually a grind. You’re typically thrilled to see any results, and often have to parlay small single-digit improvements into more single-digit improvements, hoping to eventually hit double digits. Yet somehow, this time, we managed to find our way to a 30% increase. And we have no idea how. And we don’t care to find out.
  • What was it exactly? Was it just one of the things that really mattered? All of the improvements together? A handful of small improvements that tipped into something bigger? Don’t know, don’t care. We spent six weeks on the work, we did our best, and something worked out really, really well. We’re thrilled with the outcome, and that’s enough for us. That was the point in the end, wasn’t it?
  • You could make the argument that we should have tried each thing separately, measured each impact, and then decided where to go next (or known when to stop). You could make the argument that changing so many things at once makes it impossible to know which variables actually mattered. You could make the argument that we should have been more rigorous in our evaluation, so we could learn something fundamental to apply to a future project.
  • You could argue all those things. And while you were arguing those things, or taking months to tease out answers, or trying individual things in high-traffic succession to make sure each was statistically significant, we were already basking in the results, and moving our product teams on to the next project. In six weeks all the work was done, we did our best, and it worked.
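
The trade-off being rejected above can be made concrete with the standard two-proportion sample-size formula: it estimates how much traffic a single statistically significant test of one change would need. A rough sketch in Python, using only the standard library; the baseline and lift figures are made up for illustration and do not come from the post:

```python
from statistics import NormalDist

def trials_per_variant(p_base, p_lift, alpha=0.05, power=0.80):
    """Approximate sample size per variant for a two-proportion z-test.

    p_base: baseline conversion rate; p_lift: the rate you hope to detect.
    Uses the textbook normal-approximation formula.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = NormalDist().inv_cdf(power)          # desired statistical power
    p_bar = (p_base + p_lift) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p_base * (1 - p_base)
                          + p_lift * (1 - p_lift)) ** 0.5) ** 2
    return numerator / (p_lift - p_base) ** 2

# Detecting a 10% relative lift on a hypothetical 5% trial-to-paid baseline
# takes on the order of 31,000 trials per variant:
n = trials_per_variant(0.05, 0.055)
```

Running eight separate tests of that size, one change at a time, would consume months of trial traffic, which is exactly the cost the author is choosing not to pay.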