Preparing for Change

A few things have recently made me rethink my role with my clients in ways that I think are really interesting. One thing that has influenced me is the excitement coders have had in the last few months about the capabilities of AI generally, but specifically Claude. Another is the growing and improving list of tools that nonprofits are finding useful for raising money and tracking data. When I started working with nonprofits and their data about 13 years ago, the landscape was very different. 

I’ve never thought of my job as Salesforce-specific, even though I have spent a lot of the last 13 years working inside of Salesforce. When I started, Salesforce’s API and ecosystem really didn’t have any competition in the nonprofit space. That’s not true anymore, which is wonderful. Realistically, I think most of the organizations that I work with that are using Salesforce today won’t be using it in 10 years. I’ve cleaned up a lot of failed migrations in the past, and I don’t want my clients to experience anything like that as they make changes in the future.

How does all of this affect my work? For one, when organizations come to me when they’re still exploring their options, I don’t push Salesforce at all. I’m happy to speak to the pros and cons and try to help them figure out what their specific pros and cons will be, but I don’t want to be the person who talked a nonprofit into Salesforce in 2025. I haven’t seen a CRM that is clearly better for most organizations, but it is easy to imagine one or two emerging in the next few years. 

Another way this is affecting my work is that I want my clients to be prepared for big changes in the next few years. I don’t think any of us know what those changes will look like, but there are some things to think about and focus on that can make any organization better prepared. I charge by the hour and don’t ever just go off doing work for my clients unless they want to pay for it, but it doesn’t cost them anything extra for me to approach the work with an updated mindset. With that in mind, here are some things I’m trying to think about when I help clients make decisions about what to invest time and money in.

Examining Complexity

All organizations have some complexity that is very specific to their own program, and supporting it requires investment of time and money. Whenever we’re talking about investing more time and money into supporting complexity, I want to make sure it is complexity that the organization really needs. Sometimes the complexity makes a big difference to the program and needs to be preserved. Sometimes it hasn’t been thought about for years and can be simplified. I’m not against continuing to invest in complexity in existing systems, but I want organizations to make how likely they are to still want that complexity in a few years part of their prioritization process.

Clean Data

While I don’t want to make many predictions about what the landscape will look like 10 years from now, I feel pretty confident saying that clean data will still be really important. Lots of change gets bogged down in dirty data, and I’m guessing the changes we’ll see in the next few years are no different in that way.

Explaining Data

Talking about data with people who think about other things all day can be tough. People talk a lot about how important documentation is, but they rarely talk about the kind of documentation that explains the data to people who aren’t familiar with it. That kind of documentation is the same kind that explains data to systems that aren’t familiar with it, whether that’s a new integration, a migration to a different CRM, or a move to something that isn’t a CRM but that replaces CRMs. The more we write down what data is important to us, where it lives, and how it gets updated, the easier the changes will be.

Explaining Functionality

It is impossible to effectively change the tools you’re using if you don’t understand what your current tools are and aren’t doing for you now. This is also documentation, but most documentation doesn’t come at things from this angle. Lots of organizations have documentation of what projects have been undertaken and even how those projects were implemented. Documentation that explains what functionality an organization has and why is less common, but it is more useful for planning and evaluating change and for understanding how effective an updated system is compared to the old one.

Logic Location

I’m using logic here as a shorthand for anything other than tables of data. It might be code that transforms Stripe transactions into a donation record, or it might be automations that calculate a donor’s membership level, but all of that has to live somewhere. It has always been important to put that logic in the place that is the most maintainable and best able to handle the job. Now I think organizations should also be thinking about which location is going to be the easiest to migrate to something else.

Centering Humans

This last one isn’t a change for me at all, but I mention it because I think it is an important part of thinking about change. Even if you’re an executive dreaming about laying off all the humans who work for you, some part of your work is still ultimately about humans. Nonprofits serve humans, and nonprofits need humans to support them. When we’re thinking about what to invest time and money in and what is going to still be important years from now, I want to always come back to: what is going to work for the humans? 

Preventing Massive Data Messes

I’ve been hired several times to clean up a database that is a big mess. I really enjoy these projects, and I’m really interested in the patterns that show up across them.

Unsurprisingly, one thing most of these projects share is that work was done in the past without a well-thought-out plan. The internet is full of explanations about how a good, written plan pays off for everyone involved, so I won’t repeat those here. They’re true, though! Without a plan, everyone is just a six-year-old throwing their foot up as high in the air as they can to try to kick the soccer ball. With a plan, any of us have the potential to be the older, better player who can stop the ball and decide where to kick it. 

Another thing these projects often have is that the person closest to the data has developed a focus on edge cases in the data. I suspect I’ll never figure out whether the person shows up with a focus on edge cases as part of their personality or if the environment creates that in the person. However it happens, once the person who is closest to the data has this focus, things begin to warp around that. Part of cleaning up the system is to refocus the whole team.

Most systems I’ve worked in have a lot of fairly typical data – for instance, online donations. These records get created automatically and are generally consistent. There can be all sorts of issues with this kind of thing, but that’s a topic for another day. Most systems also have a small set of records that are truly atypical and that need some handling that the typical records don’t. 

A team needs to recognize that the first priority is to get the typical data right, and that this should happen with as little human intervention as possible. A team also needs to be able to talk about typical vs. atypical data and recognize that there’s a difference! I’ve had several experiences where people have simply stopped seeing the typical data. I don’t necessarily want people to spend more time on the typical data, but I do want them to understand it and see how it is being handled automatically. 

I think that saying people need to recognize something and talk about it sounds pretty squishy, but the reason it matters is that when we start talking about atypical data, the easiest thing in the world is for the team to ask for a change based on atypical data, and in the process break the things that are working for the typical data. 

If teams were incredibly precise in their requests, perhaps this wouldn’t be an issue. If people implementing the changes were fully aware of the typical and atypical data and how the team worked day to day and felt comfortable saying, “actually, your request isn’t possible without breaking something more important,” perhaps this wouldn’t be an issue. But usually a team is making somewhat imprecise requests and handing them off to implementers who are missing a lot of context and who are incentivized to just complete the change rather than to figure out whether it is a good idea.

If you’re stuck in this cycle, there are a couple of questions you can ask and answer that may help people refocus. 

  • Can you personally explain typical vs. atypical data in your system?
  • What works for typical data?
  • What doesn’t work for typical data?
  • What kinds of data cause your team the most problems? For each of the kinds of problems you’ve talked about recently, do you know what percentage of your records are like that?

I once worked with a team that was spending a massive amount of time on records that felt very common to them because they spent so much time on them, but that were in fact vanishingly rare in their system. Just pointing out how few of them there were changed the conversation entirely.
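If your data is exportable, the percentage question is quick to answer. Here’s a minimal sketch in pandas, assuming a hypothetical export with an Issue column describing each record’s problem (blank for records with no problem):

```python
import pandas as pd

# Hypothetical export: one row per record, with a column describing
# the issue (or blank for records with no issue).
records = pd.DataFrame({
    "Id": ["a1", "a2", "a3", "a4", "a5", "a6"],
    "Issue": ["", "", "", "split gift", "", ""],
})

# What percentage of records actually have each kind of problem?
pct = (records["Issue"].replace("", "no issue")
       .value_counts(normalize=True) * 100)
print(pct)
```

Even a rough version of this, shared in a meeting, can change the conversation the way that anecdote describes.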

When cleaning up a troubled system, I like to start with the typical data and get those integrations and automations working really, really well so that people can stop working on them and just use the data. Doing this usually turns up some contortions in the automations for atypical data, and those contortions seem to always cause problems for all the data – not to mention how difficult they are to maintain over the long term.

Once the typical data is in order, then we can get to how to handle atypical data in the simplest possible way, with a plan and documentation so that we can maintain it as atypical things evolve. When dealing with atypical data, I like to start with the highest volume issues and move towards the lowest volume issue. If you’ve got some type of atypical data that someone has to spend 10 minutes a quarter updating because there’s no automated way to handle it, that’s probably not even a problem we should solve, and once we’ve gone through this process, nobody even has to say that out loud because it is so obvious. When that 10 minutes is just part of a constant avalanche of manual updates, it seems like a bug. 

Timeline Display

Salesforce announced Nonprofit Cloud a while back, and lots of organizations now have to decide whether they’re interested in migrating. For an organization with a mature NPSP instance, the cost of moving to Nonprofit Cloud is significant. I did have a user let me know that they found the timeline feature (described here in the Donor Engagement section) very appealing. I can see why! This particular organization is in no position to move to a new product right now, and so I wondered how easy it would be to build something quickly that would give them “good enough” access to similar functionality. So many nonprofits are moving so fast and with such limited resources that I like to at least look at good enough options before choosing something much more expensive and time-consuming.

While the solution I came up with lacks the really nice visualization of Nonprofit Cloud, it is extremely fast to implement. If used in combination with some field value that indicates a Task or Email is important enough to belong on the timeline, it could provide users with a lot of value for very little investment of time or money.

First I created a custom object called Timeline Entry. These records don’t actually get saved.

Then I created a Screen Flow, available on the Account page. It gets Tasks, Opportunities and Campaign History for the Contacts in the Account and displays them in chronological order. I was not able to add Files to this view. In order to easily search for Campaign Member records, I added new fields to that object for Campaign Name and Campaign Start Date. Once those records are retrieved, it assigns them to Flow variables that are Timeline Entry records. A Data Table widget in the screen flow displays the Timeline Entry records in chronological order, with links to those records. There’s no DML step to save the records, they’re just assigned to a Flow variable.

The first half of the Flow does have two Get Records elements inside a loop, which is terrible practice, but for an organization where Accounts almost never have more than two Contacts, this won’t be an issue. In general, if these queries hit limits, that probably means the timeline would have been too verbose to be useful anyway! Any of these Get Records elements could have filters that would ensure the results are manageable, both for the Flow limits and for the user who wants a quick overview.

Note that depending on how the organization uses Tasks, you may not need to get Contact and Account Tasks, or you may need to get them all and then de-dupe the collection – it just depends on the underlying data.

The second half of the Flow uses Transform to assign values to the Timeline Entries and put them in a collection to display.
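Outside of Salesforce, the core of what the Flow does is simple: union a few lists of records and sort them by date. A minimal sketch in Python, with hypothetical record types and field names:

```python
from datetime import date

# Hypothetical records pulled for one Account's Contacts.
tasks = [{"type": "Task", "label": "Thank-you call", "date": date(2024, 3, 1)}]
opportunities = [{"type": "Opportunity", "label": "Spring gift", "date": date(2024, 4, 15)}]
campaign_history = [{"type": "Campaign", "label": "Gala 2023", "date": date(2023, 10, 7)}]

# The Flow's Timeline Entry records are just the union of these lists,
# sorted newest-first for display in the Data Table.
timeline = sorted(tasks + opportunities + campaign_history,
                  key=lambda entry: entry["date"], reverse=True)

for entry in timeline:
    print(entry["date"], entry["type"], entry["label"])
```

The Flow version expresses the same merge-and-sort with Assignment elements and a sorted collection, but the shape of the logic is identical.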

June 2025: Experimenting with Data Analyses

There’s a whole set of data analyses that can and should be done at almost any nonprofit, and getting those things set up and communicating about them is a priority when an organization is setting up a new system. Once that’s done, however, a lot of fundraisers have some very specific things they want to support their work.

Over the past six months, I’ve worked with a team of major gift officers to figure out how they can best understand their progress over the previous quarter. On a quarterly basis, they wanted to see which of their donors had fallen behind on their typical giving and which donors had stayed current for their typical giving.

We started by talking about various things we could implement fairly easily in Salesforce, like quarterly rollups or matrix reports with quarterly results. Discussing those things in the abstract didn’t get us very far; we had to implement some quick and easy things to give people something concrete to work with. We did that, prototyping some simple things in production and letting them experiment. What we learned is that their definition of current and behind was idiosyncratic, but also really useful for managing their portfolios.

None of the simple experiments did what they wanted, but the conversations around those experiments really increased the specificity of what they were asking for. These major gift officers really stuck with the conversations even when it was time-consuming for them, and I’ve been grateful to them the whole time. After a few rounds of experiments, they were able to clearly articulate that they wanted a quarterly classification of donors’ movement based on a comparison of the donor’s previous 4 quarters of giving. The details of what they wanted were too complex to quickly prototype in Salesforce, so we agreed to give it a try outside of Salesforce to see if we could develop an algorithm that could consistently classify donors the way these major gift officers envisioned.

I used a Jupyter notebook to take data once a quarter and classify it, and then provided a spreadsheet of donors and their classifications back to the major gift officers. This was a great way to identify edge cases, and there were a lot of them! This organization has a lot of donors whose giving patterns are pretty irregular, and accounting for all of that took a few iterations. We did land on a system they were happy with, though, and now we have clear instructions for a Salesforce implementation. It will be more complicated than anything we could have quickly built in Salesforce, but it is doable, and now we know the effort is worth it!
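The classification we landed on is too idiosyncratic to reproduce here, but a much-simplified sketch of the general shape – compare each donor’s latest quarter to their trailing four-quarter average – might look like this in pandas (all names and thresholds are illustrative, not the real algorithm):

```python
import pandas as pd

# Hypothetical giving history: one row per donor per quarter.
giving = pd.DataFrame({
    "donor": ["A"] * 5 + ["B"] * 5,
    "quarter": ["2024Q1", "2024Q2", "2024Q3", "2024Q4", "2025Q1"] * 2,
    "amount": [100, 100, 100, 100, 0,   # A gave steadily, then stopped
               0, 0, 50, 0, 200],       # B gives irregularly
})

def classify(amounts: pd.Series) -> str:
    """Compare the latest quarter to the trailing four-quarter average."""
    history, current = amounts.iloc[:-1], amounts.iloc[-1]
    return "current" if current >= history.mean() else "behind"

labels = giving.sort_values("quarter").groupby("donor")["amount"].apply(classify)
print(labels)
```

The real edge cases – donors who give generously but chaotically – are exactly what broke naive versions like this one, which is why the iteration took a few rounds.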

Something I always worry about with complex Salesforce requests is that the team making the request will get what they’ve asked for and then never use it. I’ve seen organizations spend tens of thousands of dollars on complex solutions that it turns out they didn’t really need and never use. Prototyping – even outside of Salesforce – allows people to reflect on what they think they want without such a big investment.

For me, the whole experience reinforced how important experimentation is, and also how much the users need to participate in a process like this. We could never have understood the many edge cases without the major gift officers committing a good chunk of time to going through lots of examples and providing substantive feedback to each iteration.

April 2025: Retention

For the past 10 years, I’ve been having conversations with people about retention metrics. It is a really challenging thing to work on for a couple of reasons, none of which are technical!

Any time an organization wants retention metrics, I’ve got to dig in to what they mean when they say retention. Is a donor retained if they give once each calendar year? Once each fiscal year? Is a donor retained if they give every 18 months? Do you want to calculate different retention rates for recurring and one time donors? There are even organizations that calculate retention based on the supporter’s original cohort. In that scenario, if someone donates for the first time in 2010 and has given 14 of the past 15 years, they were considered lost in the year they missed, and the retention calculation never picks them up again!

Once we’ve had some initial conversations about what they mean when they talk about retention, we can move on to specific ways to calculate retention. There are several reasonable, widely-used formulas, and from my point of view, it doesn’t matter which one you choose as long as you can stick with it for a while and can explain it to people in your organization.

Salesforce has a post about one formula here. Another option is here. My personal favorite is the News Revenue Hub’s. All of these are good, practical options. The big thing organizations need to avoid is making the calculation overly complicated or manual, which tends to make the results inconsistent over time. The goal is to be able to see the direction retention is going over time and take action based on that, so consistency is key.
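As an illustration of how simple the calculation can stay, here’s one common convention – of the donors who gave last year, what share gave again this year? This is a generic sketch, not any of the specific formulas linked above:

```python
# A minimal year-over-year donor retention rate: of the donors who gave
# in the prior period, what share also gave in the current period?
def retention_rate(prior_year_donors: set, current_year_donors: set) -> float:
    if not prior_year_donors:
        return 0.0
    retained = prior_year_donors & current_year_donors
    return len(retained) / len(prior_year_donors)

# Two of four prior-year donors gave again this year.
rate = retention_rate({"ann", "bo", "cy", "dee"}, {"bo", "dee", "eve"})
print(f"{rate:.0%}")
```

Anything much more elaborate than this starts to threaten the consistency that makes the metric useful in the first place.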

Recently I worked with a team that was tracking retention, but also wanted reports on their retention successes and failures over the last quarter for each portfolio. This was not at all about a rate. This was an effort for portfolio managers to look at their donors and understand if those donors were ahead, behind, or even with their giving in previous years. This particular organization has donors who give generously but chaotically, so defining ahead and behind was challenging.

Getting to a point where we could put this information in front of portfolio managers took several iterations, and it took real time from the portfolio managers to dig into the results of each experiment and provide feedback. Ultimately we had to return to the original goals several times to refine our results and come up with an algorithm that would classify donors in the portfolios in a way that was most helpful to the portfolio managers.

So far, this work has involved exporting some data from Salesforce once a quarter and re-running a Jupyter notebook that does the analysis. The result is shared with the portfolio managers in a Google Sheet. We are considering implementing the Jupyter notebook analysis in Salesforce in the future, but haven’t made that leap yet. Jupyter notebook has been a faster, more lightweight way to work on this while we experiment.

This work has been a lot of fun, and I’ve been grateful to have portfolio managers who have been willing to dig into the details and provide regular feedback.

March 2025: More Recurring Donations

My update in January was about things that were cropping up around Enhanced Recurring Donations, and while I continue to find bits and bobs that are a bit annoying, I don’t have anything new and interesting to report on that front. I have been spending a lot of time deep in Recurring Donations, however, but at a different organization.

Back before NPSP existed, it wasn’t entirely obvious how to represent recurring donations in Salesforce! People tried some things. I’m currently working on overhauling one of those experiments that hasn’t aged well. I don’t think anyone involved in the design was incompetent, I just think it was a really difficult thing to understand all the consequences of since nobody had lived with any solution for any length of time!

What that means, however, is that we’re taking data that is radically different from Recurring Donations and child Opportunities and transforming it so that the organization can use NPSP Recurring Donations. We’ve now practiced twice exporting the data, deleting it from the sandbox, transforming it using pandas and then importing it back into Salesforce. I’ve never done such a radical transformation (this is way bigger than migrating from Raiser’s Edge or something like that), and it has been very interesting.

One big lesson is that the Bulk API option in Data Loader makes something like this possible. I’m importing over 3 million new records, and we’re trying to keep our instance’s downtime to a few hours. We should be able to do that because of that Bulk API feature.

Another big lesson for me personally is that pandas is really well suited to this kind of work. I picked up pandas based on one (very wise) person’s somewhat offhand suggestion years ago, and I keep telling myself I should learn some other tools, but somehow the work I’m doing always seems to be something pandas is quite good for. Being able to do these pretty elegant transformations of 3 million rows on my laptop is great! (I’ll admit it is a very nice laptop.)
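To give a flavor of the kind of reshaping involved, here’s a heavily simplified pandas sketch that collapses flat legacy gift rows into one Recurring Donation-style parent per donor plus child Opportunity rows. All column names are hypothetical, and the real transformation was far more involved:

```python
import pandas as pd

# Hypothetical legacy export: one flat row per historical gift.
gifts = pd.DataFrame({
    "donor_id": ["d1", "d1", "d2"],
    "amount": [25, 25, 100],
    "close_date": pd.to_datetime(["2024-01-15", "2024-02-15", "2024-03-01"]),
})

# One NPSP-style Recurring Donation parent per donor, using the most
# recent gift amount and the earliest gift date as the start date...
recurring = (gifts.groupby("donor_id")
             .agg(amount=("amount", "last"),
                  start_date=("close_date", "min"))
             .reset_index())

# ...and child Opportunity rows keyed by donor_id, ready to re-import
# once the parent records exist and their Salesforce Ids are known.
opportunities = gifts.rename(columns={"close_date": "CloseDate",
                                      "amount": "Amount"})
print(recurring)
print(opportunities)
```

Transformations like this are where pandas shines: each step is a short, testable expression, and re-running the whole notebook against a fresh export takes minutes.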

January 2025: Enhanced Recurring Update

Back in September, I posted about a recent upgrade I’d done from Legacy Recurring Donations to Enhanced. Since then I’ve learned an additional lesson that seems worth writing down.

I started noticing that we had some Recurring Donations that were getting their future pledge created with the wrong Close Date. Some yearly Recurring Donations had multiple open pledges for the same calendar year, which caused problems in pledge reports! Some monthly Recurring Donations also had two Opportunities for one month.

This affected a pretty small number of our total records, but it was pretty annoying.

It turns out that the Effective Date value is extremely important in creating future pledges. The upgrade guide doesn’t mention this. The guide just says that Effective Date is changed by the data migration process, and that it will be set to the earliest Opportunity’s Close Date. I do not think that’s what happened in our case!

When I look back at our pre-migration data, I can see that we didn’t use the Effective Date field until November of 2019. That’s long enough ago that I don’t remember what changed then, but it does mean that most of our active Recurring Donations entered the migration process with a value in that field. The ones that didn’t seemed to be the ones that needed to be repaired after the migration. It would have been great if the data validation had insisted I populate that field before the data could be migrated! I can’t figure out where the Effective Date value came from for these records, but it sure wasn’t correct! The day of the first Opportunity Close Date would have worked perfectly.

We also hadn’t used Day of Month before the migration. That value got populated in the migration, and in most cases it was fine, but I think it must have been using Effective Date to determine that value.

I figured this out only because when repairing these records, my updates didn’t stick until I also changed Effective Date.

I’m guessing we were fairly unusual to have active records with no Effective Date or Day of Month, but I’ll definitely keep it on my list of things to check for future migrations!

December 2024: Just because you can doesn’t mean you should

People I work with frequently assume I work for Salesforce or have worked for Salesforce in the past, and that I’m always going to recommend Salesforce’s solution. None of those things are true! I started using Salesforce (Sales Cloud) because it was the right tool for a particular job. Like most people in the Salesforce ecosystem, I’ve never worked for Salesforce and don’t plan to.

A lot of my Salesforce work involves making it an efficient place for users to do a lot of their work and so I often find myself advocating to put some bit of data into Salesforce rather than leaving it in Airtable or a Google Sheet or someone’s notebook. But sometimes data doesn’t belong in Salesforce, and a recent experience made me realize it can be useful to think through these things!

The same organization that stores extensive survey data in Salesforce was planning a new online form. The information they were collecting was very different from the survey data; it overlapped somewhat with the data they already have in Salesforce, but was mostly different. We met to talk about whether or not the FormStack + Salesforce solution that has worked well for the survey data was the right approach for this new project.

We considered a few factors and eventually decided to not use Salesforce for this data. We expect to import a little bit of the data involved in this new project to Salesforce once the project is complete.

Arguments for using FormStack + Salesforce

  • We’re familiar with both tools
  • The experience of the person submitting the information would be good
  • We’d be able to store the data in a way that would improve our full understanding of some people and organizations already in Salesforce

Arguments against using FormStack + Salesforce

  • Once the data is submitted, the people who need to work with it are not Salesforce users, and really shouldn’t be Salesforce users. If we made them Salesforce users, we’d have to pay for their licenses for a year and we’d have to set up new profiles and permissions for them.
  • Most of the submitted data doesn’t overlap with existing data, and we’d have little use for it in the future.

And so we decided not to use Salesforce for this project! As of today, small to medium nonprofits who need to be able to connect other systems to their CRM have trouble finding a better choice than Salesforce’s Sales Cloud product. I expect that will change in the future and I’ll happily switch to a better tool for the job when it exists.

September 2024: Enhanced Recurring Donation Migration

If you started using Salesforce’s NPSP a while ago, you might have used what are now called Legacy Recurring Donations (LRDs). They provided some great functionality but there was a fatal flaw (see below) at the center of the implementation. It went wrong one day in March 2018, and sometime after that Salesforce decided to overhaul Recurring Donations. That overhaul is Enhanced Recurring Donations (ERDs). Salesforce has said they won’t be improving LRDs, and so most organizations who use Recurring Donations will want to migrate at some point.

Salesforce has provided some tools to make migrating more practical. I’ve had a good experience with those tools, but the migration is still a real process. I recently completed my first big production migration and since I couldn’t find many accounts like this, I figured I would publish mine!

The organization I migrated is one that has been using NPSP and Recurring Donations since 2015 and had about 6,000 active Recurring Donation records when we did the migration. For a variety of reasons we had set Recurring Donations to have 36 months of future pledges rather than 12 months, so we had a lot of Opportunities.

Should you migrate?

I think migrating is worth the effort. I also think you probably can’t migrate safely unless you have recurring donations controlled outside of Salesforce. That’s true of most organizations, but if some external processor looks to Salesforce to charge or not charge a donor, touching Recurring Donations at all is high risk, and migrating is a lot of changes!

Salesforce’s documentation for this process is good, and the tool they’ve built worked really well for us. We wound up needing a few hours of downtime on a Saturday that didn’t bother our users.

The big lesson I take away from our experience is that this should not be attempted without a full sandbox. I know a lot of small to medium NPSP organizations don’t want to pay for it, but I’d really encourage you to do that. Unless you’ve practiced several times in a full sandbox, you have no idea how long the downtime will be while you migrate. Our downtime wound up being 3-4 hours, which we scheduled on a Saturday afternoon and prepared everyone for months in advance.

For us, the key thing we did in the full sandbox was to work through the many, many validation errors the migration tool encountered. All of them were solvable, but many of the changes required to our data were precisely the kind of changes that LRD wasn’t great at dealing with, so being able to make those changes in the Sandbox, verify that they addressed the validation error and then to verify that those data changes didn’t have other side effects was fantastic. I’m sure some organizations will have minimal validation errors, but we had to update thousands of records in order to pass that stage of the migration, and some of the fixes took several tries. (More on that below.)

How to make the business case to migrate

Less Fragile Data: Particularly if a full sandbox is a new expense, you’ll probably need to make the case that migrating is worth the time and money. We’d found over the years that LRDs were more fragile than any other record in Salesforce. I’d had to set things up so that users could only edit Recurring Donations via a few specific quick actions and I used several reports to constantly monitor for issues. The total number of problems we had over the years was small, but when the data was bad it was very bad! I explained to everyone on the team that our total cost of ownership for Recurring Donations would be lower once we’d moved to ERDs and users would be happier.

Less Clutter: This organization found LRDs to generate a lot of clutter, and it was worse because we were creating 36 months of pledges rather than the typical 12. The cleaner ERD model with one pledge at a time appealed to users.

Easier automations: Like most organizations, we have some automations that run when a new Opportunity is created. With LRDs, each time a monthly record was created, that meant 36 Opportunities! Creating fewer Opportunities meant a lower risk that these automations would hit limits.

Supported Functionality: Of course we want to use a supported feature set from Salesforce that will get improvements!

Validation errors

While we only had about 6,000 active LRDs to migrate, we had lots of LRDs that were closed, and those also had to be migrated. We ran into a variety of validation errors, some of which were easy to fix, and some of which weren’t! While I found that the documentation for migrating was pretty good, I don’t think they could anticipate all the different errors people might run into.

A big one I ran into was that we had a lot of LRDs that had a number of planned installments that was smaller than the number of paid installments. They were all closed. I am pretty sure these originated from LRDs being created with a zero in the planned installments field rather than null. (Just one thing to hate about LRDs!) I hated to bulk update this field because it is one of the fields I’d found to be so problematic with LRDs, but I did make that fix and everything worked out well.

The other troublesome error was about Open Ended Status being Closed and Schedule Type being blank. In that case, the migration tool requires that the number of planned installments be 1. Schedule type was my other most-hated LRD field, so I guess I shouldn’t have been surprised that this was an issue. I did do this bulk update and it worked out fine.

Projected revenue reports after migration

One really nice thing about the future pledges that LRDs create is that they make it easy to build reports projecting future recurring donations. While ERD provides some useful functionality for this, I found that we needed an additional field as well as a lot of new reports to do the same. Because our users were accustomed to seeing Opportunity reports with pledged and closed won Opportunities, particularly for the current calendar year, I created a report that allowed them to continue to see similar reports, but with a new field called CY Dashboard Value. You can think of it as CY Pledged Value, but for this particular group of users, the term dashboard is useful. This let them continue to separate pledged vs. closed won revenue for the year in one report, and to get a total of both.

The field is a formula that returns the Amount for one-time or closed won gifts. If the gift is pledged, it returns the amount of all payments remaining in the calendar year.

IF(
  OR(
    ISBLANK(npe03__Recurring_Donation__c),
    ISPICKVAL(StageName, 'Closed Won'),
    Recurring_Donation_Frequency__c = 'yearly'
  ),
  Amount,
  Amount * (13 - MONTH(CloseDate))
)
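To sanity-check the arithmetic, here’s the same logic mirrored in Python with made-up sample gifts. The function and its arguments are just illustrative stand-ins for the formula’s fields, not NPSP code:

```python
def cy_dashboard_value(amount, close_month, stage, frequency, has_recurring_donation):
    """Mirror of the CY Dashboard Value formula: one-time, Closed Won,
    and yearly gifts count once; monthly pledges count once for each
    remaining month of the calendar year, including the close month."""
    if (not has_recurring_donation
            or stage == "Closed Won"
            or frequency == "yearly"):
        return amount
    # 13 - month = months remaining in the year, counting the close month.
    return amount * (13 - close_month)

# A $100 monthly pledge closing in October counts for Oct, Nov, Dec.
print(cy_dashboard_value(100, 10, "Pledged", "monthly", True))  # -> 300
# A $100 Closed Won gift counts once, regardless of frequency.
print(cy_dashboard_value(100, 10, "Closed Won", "monthly", True))  # -> 100
```

The 13 − month term is the part worth double-checking: it counts the close month itself plus every month after it through December.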

Updating reports in a Salesforce instance that is 9 years old is always a challenge! I let users know that they could look for a ✅ emoji in the report description to indicate that the report had been updated after the migration. Our most used reports were all updated as soon as the migration was finished.

The fatal flaw in Legacy Recurring Donations

At the top of this post I mentioned the fatal flaw with LRDs. I’d used LRDs in multiple organizations for years when, one day, something weird happened overnight to several hundred Recurring Donation records in one of those orgs. I was absolutely certain none of us had updated the records that had changed, and we had no automated processes of our own capable of doing such a thing. As I dug into the issue, I found out all kinds of things, but the big thing I discovered in the ensuing conversations with Salesforce support is that LRD Opportunities are deleted and recreated with fake created dates every night.

On one very unlucky night in March of 2018, this process made some mistakes, changing some close dates and amounts. Not all of them, just some. In some Salesforce instances nothing was changed. I never figured out a pattern to these changes, but wound up having to review millions of records to see if they’d been changed by this process that ran amok. It was absolutely awful and Salesforce really didn’t care.

I was on the phone with someone at Salesforce when they (I think accidentally) said out loud that all the Opportunities were being deleted and recreated every night, and I felt like I’d had the breath knocked out of me. I tried to imagine what would happen if one of us outside of Salesforce submitted an app for review on the AppExchange that did such a thing! I’m still pretty speechless about it. I did confirm with them that ERDs do not do this.

In retrospect, I guess the amazing part is that this worked as well as it did. But it obviously isn’t a great way to set things up, and it meant that every night the code was trying to be very clever about “fixing” Recurring Donation records, and it shouldn’t surprise any of us that sometimes it got too clever – especially with fields that were interrelated and confusing for users.

June 2024: Starting Simple

I’ve worked on several projects this year where the big goal is pretty complex and everyone involved knows it will take a lot of work. The typical way to approach something like this is to understand the requirements for the entire project and then build, test and declare the project finished. Most consulting companies can’t be financially successful unless they approach projects this way.

I get hired pretty regularly to clean up behind this approach, and so I tend to think it doesn’t work very well. Of course, I’m working with very skewed data. If you’re already happy with your system, you’d never contact me!

The big goal for any system should be that it works for all the humans involved. Sometimes the requirements as communicated early on reflect that, and sometimes they don’t. Every time I go to the doctor and watch everyone there struggle with their Epic implementation, I think about how, when these health systems purchased it, the billing team gave really clear instructions about what they wanted and the doctors and nurses didn’t, probably because billing people had lots of experience writing software requirements and doctors and nurses had none.

A new system almost always demands that those humans change some behavior they’re very accustomed to. The technology part is the easy part. If you’re not acknowledging this in your process, I’m a lot more likely to get hired to clean up behind you.

I really like to start projects of any scope with a simple step that puts something concrete in front of the staffers, who are typically the first group of humans who will interact with the system. For a data migration, this might be just migrating the contact information and the organization or household information, and sometimes just a portion of it! For a from-scratch system, it might be a bit of data to solve a single problem they’ve brought to me, ignoring the other project goals for the moment. For a feature like moves management, it might be the simplest possible template for moves they’d like to track.

Advantages of this approach

Uncover the real requirements

I’ve found that once I put something real in front of people, the whole conversation becomes a lot more productive, and users often change their mind about what they want! Then I’ve gotten to the real requirements, which might not have been clear to anyone involved. This can be a huge time and money saver for the organization, both because you do things once instead of several times, and because organizations often realize they don’t actually want something complicated and expensive they thought they wanted!

This approach also gives everyone time and space to get to the reason the requirements are the requirements. Particularly at organizations where a system has been dysfunctional for a while, people start to ask for things that no one should need. If you’ve got a 10 step process for sending out thank you notes and the requirements specify that step 7 needs to be fixed, we should really talk about why there’s a 10 step process, and probably throw it out and replace it with something radically simpler.

Build trust with your users

Particularly if I’m cleaning up a failed project, building trust is the first thing that has to happen. The people who failed might have been quite polite about it, but sometimes they were real jerks, and the longer the organization has tolerated that, the harder it is to build trust. Putting one good thing in front of users is a good first step, but you’re not going to get their full participation until you’ve earned their trust.

Give your users a way to communicate with you

When those of us who design and set up systems ask users to collaborate with us, we’re asking them to think like us. If they could really do this, we wouldn’t be in the room! By putting something real in front of them and letting them kick the tires a bit, we’re giving them a way in.

Build one step at a time

Once you’ve got the first simple thing in front of users, you can start adding more simple things bit by bit. This lets people handle the change more comfortably, and lets you fix things before they’re sprawling messes.

Better user adoption

With this approach, we’re having users make more tiny changes rather than a few giant changes. I once worked with a Salesforce admin who had started her career as a social worker, and she reminded me that changing human behavior is HARD. Think about how annoyed you are when some tool you use daily changes the UI. Think about how hard it is to start going to the gym or quit smoking. That’s behavior change, and it isn’t that different from asking people to stop using a spreadsheet and start using a database!

Disadvantages

You start with a lot of unknowns

You can’t predict at the beginning how long the whole project will take or how much it will cost. This alone usually keeps consulting companies from engaging this way, because it makes managing their people and payroll nearly impossible.

Staffers have to be very open and involved

Staffers who are going to use this system have to be both open to this iterative process, and they have to have the time to commit to reviewing new features and providing feedback. I’d argue no approach can work if the staffers aren’t open and spending time on the project, but people sure do keep trying.

The provider has to practice empathy and listen a lot

I don’t personally consider this a disadvantage, but not everyone finds this kind of thing rewarding. Technology should serve humans, not the other way around. You can’t build something that serves people if you just assume you know what they want, rather than actually finding out.

Sometimes extra listening and empathy and time is required because the technologists who came before you were such jerks that the users can’t bring themselves to trust you.

The provider has to say no sometimes

When I was in high school and college I waitressed at a restaurant where we all wore buttons that said, “Yes is the answer, now what is the question?”

When you’re selling something, you want to say yes! But if you’re building something, you’ve got to stop selling it. Building trust doesn’t mean saying yes all the time. Sometimes users ask for things that aren’t going to work out the way they think they will. Sometimes the users’ bosses ask for things that aren’t going to work at all. If you’re really partnering with an organization to build a system to serve them, you’ve got to be willing to say, “based on my experience, I don’t think this thing you’re asking for is going to serve you well, and I think ultimately you’ll be sorry you paid me to do it.” This is uncomfortable, and most people would rather just build what the organization is asking for. Even more uncomfortable is to say, “that’s not work I’m willing to do, but I’m sure there are other folks out there who would be happy to do it.” Just like in the rest of our lives, having uncomfortable conversations is important, and more likely to result in good outcomes for everyone than saying things you don’t mean.