Little bets in Customer Success

Artem Gurnov
Published in CX@Wrike
8 min read · Jul 5, 2022

Working in the customer engagement team is all about constant innovation. We engage in new activities and try new approaches all the time in order to maximize the value our customers get from our product. As a result, when the renewal date arrives, if we've done our job well, the client doesn't need much deliberation to decide whether to renew their subscription. As I briefly mentioned in my review of the book Little Bets by Peter Sims, I've always been curious about whether there's any formal approach to innovation. I could not believe that innovation was only about people getting lucky every now and then and coming up with something the market really needed. When I dived into the brilliant ideas and suggestions the author offers about the innovation process, I immediately started thinking about how they resonate with our own approach to innovation here at Wrike, and I'm very happy to say that there are a lot of similarities. Obviously, we're still very far from the ideal state (if it can be reached at all), but we most definitely follow several core principles already. With that in mind, in this article I'm going to share some examples of the little bets we've made in our engagement team at Wrike and how each of them turned out.

It's imperative that I start by highlighting one of the key values of our customer success organization (CSO) here at Wrike: "fail gloriously." This is not just another corporate slogan written on the company's website or business cards; it serves as a guiding principle for every department in the CSO. Failing gloriously is about continuous experimentation: trying something new, failing, quickly getting back up, and learning from our failures. In many companies (including some I've previously worked at), mistakes are punished. When employees try something new and fail, they're compared to others who did not fail. This sends a clear signal to everyone that initiative and new approaches are not welcome and that everyone should just stay within the guidelines advised by leadership. It's very hard for innovation to be born in that environment. I have to clearly state that all the big wins our team has had are at least partially attributable to the fact that we were allowed, and even encouraged, to make mistakes and find out what wasn't working. And boy, did we make a lot of mistakes. To wrap up this topic, here's a fun fact: every month at the global CSO meeting, we reserve time to celebrate that month's glorious fails. What, if not that, could serve as clear, indisputable proof that this is a principle our leadership team truly believes in?

Whenever we as a team engage in yet another experiment and try to test a certain theory or assumption, we tend to follow this algorithm:

Step 1: Define what we expect to achieve

It's critical that everyone is aligned on what we consider a success and on the indicators that tell us we're moving in the right direction. Obviously, a major component of a successful outcome is that it is measurable and trackable. A good example of a successful outcome would be "increase net retention by 2%," while a not-so-good example would be "increase the overall customer satisfaction with reports." While the latter could be an important initiative, it would need to be reformulated to make clear how we would track it and measure success (e.g., "increase the usage of the reports feature by 20%").
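
To make "measurable and trackable" concrete, here's a minimal sketch in Python of how a net retention target like the one above could be checked. The formula is the standard net revenue retention calculation, and the numbers are purely illustrative placeholders, not real data.

```python
# Minimal sketch: checking a "net retention +2%" target.
# The figures below are illustrative placeholders, not real data.

def net_retention(starting_mrr: float, expansion: float, contraction: float, churn: float) -> float:
    """Net revenue retention for a cohort over a period, as a percentage."""
    return (starting_mrr + expansion - contraction - churn) / starting_mrr * 100

baseline = net_retention(starting_mrr=100_000, expansion=8_000, contraction=3_000, churn=5_000)
current = net_retention(starting_mrr=100_000, expansion=12_000, contraction=3_000, churn=4_000)

# The bet counts as a success only if the metric moved by the agreed amount.
print(f"baseline: {baseline:.1f}%, current: {current:.1f}%, target met: {current - baseline >= 2.0}")
```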

Step 2: Establish a timeframe

Every experiment we engage in has a clear timeframe. It could be a week, a month, a quarter, or sometimes longer. The factors defining the timeframe usually include the resources required, the number of business-critical objectives at any given moment, the initiative’s business impact, and, of course, common sense.

Step 3: Decide on the necessary MVP level

It would take a very long time to test different hypotheses if each of them involved a super-detailed plan and a desire to reach an ideal, polished state. Innovation is about producing the necessary minimum that the market will consume and provide feedback on. So when planning our activities, we accept from the very start that, in the testing phase, they will be far from perfect, but ready enough to evaluate whether a certain initiative was a success or a failure.

Step 4: Execute and evaluate the results

The next step is all about executing the plan according to the requirements established in the previous steps. Once the necessary amount of data has been accumulated, it's time to decide whether the experiment was a success or a failure. In the case of success, the next step is to repeat the experiment one more time to confirm that the result was not pure luck or an anomaly. If the second attempt also succeeds, the initiative is documented in a playbook and scaled to the whole team. If the experiment is considered a failure, a report with insights and findings is created and shared with the team at its next meeting. Sometimes the insights from failed experiments lead to great ideas.

Examples of our little bets

Little bet #1: New region development boost

When choosing which territories to focus on as CSMs, we take multiple factors into account, such as time zone, the languages team members speak, account size, and others. As a result, certain regions tend to get more support than others. We decided to see what would happen if we shifted the schedules of several experienced team members 1–2 hours forward so they would have more availability for less supported regions, and whether that would lead to better customer growth and development there. We chose a one-month timeframe and selected two experienced CSMs for this initiative. The expected outcome was to grow X customers by a certain percentage.

Result: failure. It became clear that the time we had reserved for this experiment was far too short to make a statistically meaningful impact, and since the other regions were demonstrating consistent growth, we decided to give up on this project for the time being. We did get a lot of valuable insights along the way, though. First, we tried different custom emails for outreach, and several of them had a noticeably better open/click rate, so we had the chance to use them with other customers. Second, it became clear that the region we tested in had a lot of potential and that we should most definitely continue trying new approaches there; we plan to run more experiments in that region in the upcoming quarters.

Little bet #2: CS/PS hybrid function

As customer success managers, we're used to conducting Q&A calls with our customers to address their challenges and maximize the value they're getting from the product. At Wrike, Customer Success is a complimentary service for our customers; in a nutshell, it's all about providing strategic guidance to our clients to assist them in achieving their Wrike-related goals and, as a result, ensure their retention. But if our customers need hands-on assistance, we offer paid professional services (PS) packages. Our theory was that a CSM could cover the PS function for the most affordable PS package. Since the content discussed during that deployment type heavily intersects with the topics covered by CSMs, we decided to involve CSMs in delivering those PS calls instead of increasing the headcount.

Result: success. Not only did the CSMs do an amazing job delivering professional services, but we also decided that they would become the dedicated CSMs for those customers. As a result, we ensured better continuity in our relationships with these customers, no loss of information about the clients' challenges, and no need to onboard new team members, since these CSMs already had deep product knowledge and expertise.

Little bet #3: One-off webinars on Wrike mobile apps

Since many teams using Wrike are fully remote, and their employees are often mobile throughout the day, we thought it would be a great idea to drive the usage of the Wrike mobile apps by conducting thematic webinars: one for iOS and another for Android. Our intention was to share specific use cases and best practices and, later on, to track what percentage of attendees' teams started using the mobile apps at all or more frequently. The timeframe of this experiment was less than a month. It took us several weeks to conduct interviews with our mobile app product teams to learn every detail possible, and we also ran mini knowledge-sharing sessions with other CSMs and members of other customer-facing departments to learn how their clients were using the apps. We then needed time to develop and polish the content for the webinars and to find the best way to screen share from both iPhone and Android for a live demo.

Result: failure. Even though we leveraged the help of our marketing team to promote the webinars, the topics simply didn't hit the spot. We only had a limited number of attendees for each session, almost no questions were asked, and we didn't see much of an increase in mobile app usage after the sessions. The main insight from this initiative was that, if we really want to drive change in how customers use our product, we should choose broader topics with the potential to attract a wider audience. We also got several valuable pieces of feedback that gave us a better understanding of our customers and of which industries and markets are more likely to use the mobile apps.

Little bet #4: Product usage-based notifications for CSMs

Customer success managers in our engagement team are responsible for very large books of business, each including a minimum of 500 accounts. Obviously, in our situation we need to be very efficient in how we approach clients, and every initiative should take the scale of our work into account. We were looking for a simple way for our CSMs to spot a decreasing usage trend early so we could take the necessary action and mitigate the churn risk. Any manual approach was out of the question, since CSMs would then need to spend dozens of hours on analysis, which our team could not afford. We decided to address this challenge by creating an automated system that tracked the product usage of each customer in a CSM's book of business; should usage drop by a certain percentage, the CSM would immediately receive a task in the CRM to engage the client.
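
For those curious what such a system could look like under the hood, below is a heavily simplified sketch in Python. The data shape, the 30% threshold, and the create_crm_task stub are hypothetical placeholders, not our actual implementation; the real system relies on our own analytics data and CRM integration, which aren't shown here.

```python
# Simplified sketch of the usage-drop check behind the CSM notifications.
# The data shape, threshold, and CRM call are hypothetical placeholders.

DROP_THRESHOLD = 0.30  # flag accounts whose active usage fell by 30% or more

def usage_drop(previous_period: int, current_period: int) -> float:
    """Relative drop in active usage between two periods (0.0 means no drop)."""
    if previous_period == 0:
        return 0.0
    return max(0.0, (previous_period - current_period) / previous_period)

def create_crm_task(account_id: str, drop: float) -> None:
    # Stand-in for the CRM integration that assigns a follow-up task to the CSM.
    print(f"Task created for account {account_id}: usage dropped {drop:.0%}")

def scan_book_of_business(accounts: list) -> None:
    """Check every account in a CSM's book and raise a task for the risky ones."""
    for account in accounts:
        drop = usage_drop(account["active_users_last_month"], account["active_users_this_month"])
        if drop >= DROP_THRESHOLD:
            create_crm_task(account["id"], drop)

scan_book_of_business([
    {"id": "acct-001", "active_users_last_month": 40, "active_users_this_month": 22},
    {"id": "acct-002", "active_users_last_month": 15, "active_users_this_month": 14},
])
```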

Result: success. Implementing this semi-automated tactic allowed CSMs to spend less time prioritizing work and more time actually engaging the customers who really needed their help. Certain change management issues were, of course, involved, because CSMs had to build new routines for reviewing these tasks on a daily basis. But overall, this bet had a noticeable positive impact on the retention of customers in the segment our team is responsible for.

These are just a few examples from our experience. But as you can see, in order to innovate and improve processes, it’s not necessary that each idea be a major breakthrough that would fundamentally change the way the company operates. Each bet can actually be something very small and not time-consuming. But when the effect of multiple bets hitting the spot is aggregated, the overall impact on the organization may be groundbreaking.
