For the past 10 years, I’ve been having conversations with people about retention metrics. It is a really challenging thing to work on for a couple of reasons, none of which are technical!
Any time an organization wants retention metrics, I’ve got to dig into what they mean when they say retention. Is a donor retained if they give once each calendar year? Once each fiscal year? Is a donor retained if they give every 18 months? Do you want to calculate different retention rates for recurring and one-time donors? There are even organizations that calculate retention based on the supporter’s original cohort. In that scenario, if someone donated for the first time in 2010 and has given in 14 of the past 15 years, they are considered lost in the year they missed, and the retention calculation never picks them up again!
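To make the difference concrete, here’s a minimal sketch of two of those definitions side by side. The data shape, donor names, and function names are all hypothetical, just to show how the same gift history can be “retained” under one definition and “lost” under another:

```python
from datetime import date

# Hypothetical gift history: supporter ID -> list of gift dates.
gifts = {
    "donor_a": [date(2023, 3, 1), date(2024, 2, 15)],
    "donor_b": [date(2023, 6, 1), date(2025, 1, 10)],
}

def retained_calendar_year(gift_dates, year):
    """Retained if the donor gave in both `year` and the prior calendar year."""
    years = {d.year for d in gift_dates}
    return year in years and (year - 1) in years

def retained_within_months(gift_dates, as_of, window_months=18):
    """Retained if any two consecutive gifts fall within `window_months`
    of each other -- a rolling-window definition some organizations use."""
    ordered = sorted(d for d in gift_dates if d <= as_of)
    return any(
        (later - earlier).days <= window_months * 30
        for earlier, later in zip(ordered, ordered[1:])
    )

print(retained_calendar_year(gifts["donor_a"], 2024))  # True: gave in 2023 and 2024
print(retained_calendar_year(gifts["donor_b"], 2024))  # False: skipped 2024
```

Under the 18-month window, donor_b’s mid-2023 and early-2025 gifts are just outside the window, so that donor is lost under both definitions here, but a donor who gives every December and then every January would flip depending on which definition you pick.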
Once we’ve had some initial conversations about what they mean when they talk about retention, we can move on to specific ways to calculate it. There are several reasonable, widely used formulas, and from my point of view, it doesn’t matter which one you choose as long as you can stick with it for a while and can explain it to people in your organization.
Salesforce has a post about one formula here. Another option is here. My personal favorite is the News Revenue Hub’s. All of these are good, practical options. The big thing organizations need to avoid is making the calculation overly complicated or manual, which tends to make the results inconsistent over time. The goal is to be able to see the direction retention is going over time and take action based on that, so consistency is key.
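As a generic illustration of how simple the calculation can stay, here is one common year-over-year formula: the share of last year’s donors who gave again this year. This is not necessarily the formula from any of the linked posts, just a sketch of the kind of thing that’s easy to run consistently every period:

```python
def retention_rate(prior_year_donors, current_year_donors):
    """Share of prior-period donors who also gave in the current period.
    One common, simple formula; the posts linked above describe variants."""
    prior = set(prior_year_donors)
    if not prior:
        return 0.0
    retained = prior & set(current_year_donors)
    return len(retained) / len(prior)

# 2 of 4 prior-year donors gave again -> 0.5
rate = retention_rate(["a", "b", "c", "d"], ["b", "d", "e"])
print(f"{rate:.0%}")
```

Note that donor "e" is new this year and doesn’t count toward retention at all; new-donor acquisition is a separate metric, which is exactly the kind of thing worth writing down so everyone calculates it the same way every quarter.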
Recently I worked with a team that was tracking retention, but also wanted reports on their retention successes and failures over the last quarter for each portfolio. This was not at all about a rate. This was an effort for portfolio managers to look at their donors and understand whether those donors were ahead of, behind, or even with their giving in previous years. This particular organization has donors who give generously but chaotically, so defining “ahead” and “behind” was challenging.
Getting to a point where we could put this information in front of portfolio managers took several iterations, and it took real time from the portfolio managers to dig into the results of each experiment and provide feedback. Ultimately we had to return to the original goals several times to refine our results and come up with an algorithm that would classify donors in the portfolios in a way that was most helpful to the portfolio managers.
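The real algorithm took several iterations of feedback to get right, so the version below is only a hypothetical first-pass sketch of the general idea: compare a donor’s giving so far this year against their giving through the same date last year, with a tolerance band so that small differences still count as “even.” The function name, data shape, and 10% tolerance are all assumptions for illustration:

```python
from datetime import date

def classify_pace(gifts, as_of, tolerance=0.10):
    """Label a donor 'ahead', 'behind', or 'even' by comparing year-to-date
    giving with giving through the same date in the prior year.
    A simplified sketch, not the algorithm the portfolio managers settled on."""
    ytd = sum(amount for d, amount in gifts
              if d.year == as_of.year and d <= as_of)
    last_ytd = sum(amount for d, amount in gifts
                   if d.year == as_of.year - 1
                   and (d.month, d.day) <= (as_of.month, as_of.day))
    if last_ytd == 0:
        return "ahead" if ytd > 0 else "even"
    ratio = ytd / last_ytd
    if ratio > 1 + tolerance:
        return "ahead"
    if ratio < 1 - tolerance:
        return "behind"
    return "even"

# $150 so far this year vs. $100 through the same date last year -> "ahead"
history = [(date(2024, 2, 1), 100), (date(2025, 2, 5), 150)]
print(classify_pace(history, date(2025, 6, 30)))
```

A same-date-last-year comparison like this is exactly where chaotic givers break things: a donor who gave in March last year and will give in November this year looks “behind” all summer. Handling that kind of pattern is what drove the repeated returns to the original goals.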
So far, this work has involved exporting some data from Salesforce once a quarter and re-running a Jupyter notebook that does the analysis. The result is shared with the portfolio managers in a Google Sheet. We are considering implementing the analysis directly in Salesforce in the future, but haven’t made that leap yet. The Jupyter notebook has been a faster, more lightweight way to work on this while we experiment.
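The notebook side of that workflow can be very plain. Here’s a sketch of the kind of first cell that loads a quarterly CSV export; the column names and donor IDs are invented, since I’m not showing the organization’s real export format, and the Google Sheets upload step is omitted:

```python
import csv
import io

# Stand-in for the quarterly Salesforce export file.
# The field names here are hypothetical, not the org's actual report columns.
export = io.StringIO(
    "DonorId,GiftDate,Amount\n"
    "003A1,2025-02-05,150\n"
    "003B2,2025-03-12,75\n"
    "003A1,2025-03-20,50\n"
)

# Roll up total giving per donor as a starting point for the classification.
total_by_donor = {}
for row in csv.DictReader(export):
    donor = row["DonorId"]
    total_by_donor[donor] = total_by_donor.get(donor, 0.0) + float(row["Amount"])

print(total_by_donor)
```

Keeping the pipeline this boring is the point: a quarterly export plus a re-runnable notebook is cheap to iterate on, which mattered while the classification rules were still changing.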
This work has been a lot of fun, and I’ve been grateful to have portfolio managers who have been willing to dig into the details and provide regular feedback.