When AI Intuition Outpaces Reality: Lessons from an Agentforce for Nonprofit Community Sprint

December 19, 2025

“Does anyone know what this error message means?”

Standing on the 7th floor of the Salesforce Tower, I feel like I’m having déjà vu. I’m in San Francisco attending a Salesforce Open Source Commons Sprint: the first ever sprint organized around Agentforce for Nonprofits, a unique opportunity for nonprofit admins, practitioners, and SIs to come together for two solid days to create free, community-led solutions to unique AI problems. This is my first time attending a Commons Sprint, and I am immediately aware of the special vibe of the event. The energy in the air is electric; people are reconnecting with old friends and coworkers, and there is an eagerness to dive into problems and get creative with solutions. But the further we get into the sprint, the more aware I become of a small but growing undercurrent of frustration.

After a strong start on day one, each group has identified a unique problem faced by nonprofits and challenged itself to create a solution with Agentforce. But it’s now halfway through day two, and I am hearing, for probably the third time, “I’m stuck, does anyone know what this error message means?” The excitement of the first day has slowly but surely faded into shared commiseration. This is harder than we thought.

As technology enthusiasts and Salesforce consultants, we are used to navigating the murky waters of a new product. Most new product launches come with some degree of frustration as we decipher missing documentation and the gaps between features promised and features delivered. But something about this frustration felt different; it was closer to exasperation. It wasn’t a frustration of “I don’t know how this is supposed to work.” It was “I know what is supposed to happen, so why isn’t it happening?”

The AI Intuition Gap

The more I reflect on this, the more I think it points to a bigger issue. As a culture, we have developed a level of intuitive knowledge about AI that does not yet match our learned knowledge of this specific product (Agentforce).

Intuitive knowledge is subconscious: gut feelings and instant knowing built from experience. It’s hard to put into words, but it’s deeply valid, and it informs the way we interact with and perceive the world. Learned knowledge is the structured, conscious, logical understanding of information. It comes from teachers, from documentation, and from experimentation and refinement.

For most of my nonprofit clients, these two types of knowledge develop simultaneously when they migrate to a Salesforce product. Whether a new client is moving off spreadsheets or another CRM, Salesforce is a “net new” experience: they don’t know how Salesforce works, but they also, critically, don’t have any intuition about how it should work. That gives us, as consultants, the opportunity to provide training, respond to confusion, and play an active role in shaping both the learned knowledge and the intuitive knowledge.

But when I compared that experience with our experience at the Commons Sprint, something was different. The frustrations felt by me and the attendees around me were actually an interesting sign: most people at the event already had an intuitive feel for how a chatbot, an LLM, or even “AI” in a broad sense should work. Unfortunately, our experience setting up these agents and AI features was not living up to those internal expectations. Even though the Agentforce product is new, AI in our lives is not. Most of us arrived at the Commons Sprint already full of preconceived ideas, notions, and tolerances for working with AI. We had expectations about the product that we probably were not even aware of at the start.

When we apply those expectations to Agentforce, I think for many of us it felt like taking a step backwards. Our other interactions with public LLMs are ready to go: the models are trained, the UI is clean, and the experience is predictable. By comparison, we now have to set up these agents manually? Why don’t they just work out of the box? There are at least two reasons.

First is the setup and configuration. Each organization has different use cases, different goals, and different constraints. Each agent needs to be finely tuned to respect these use cases and protect your data across audiences. That takes a lot of forethought and planning, and the execution takes care. Compared to public models that offer a one-size-fits-all approach, these Salesforce agents require much more attention. 

Another reason is the data. Agentforce leverages the same public models we are used to working with, and while the underlying models are extremely powerful, the agents are limited by the data in your org. When we work with Gemini, Claude, or ChatGPT, we are working with a model that has been trained on millions of websites, books, articles, movies, and more. When we ask Gemini a question, it has the entire web at its disposal to find answers. But our agents need to be grounded in company data, and the quantity and quality of that data, even in the most amazing, pristine organization, pale in comparison.
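To make the grounding idea concrete, here is a minimal, hypothetical sketch in Python. This is not how Agentforce works under the hood; it simply illustrates that a grounded agent can only answer from whatever context we hand it, so blank or messy fields put a hard ceiling on answer quality.

```python
# Toy illustration of grounding: the model only "knows" the records we pass in.
# The record shape and field names here are made up for the example.

def build_grounded_prompt(question: str, org_records: list[dict]) -> str:
    """Assemble a prompt whose only context is records pulled from the org."""
    context = "\n".join(
        f"- {r.get('Name', 'Unknown')}: {r.get('Description') or 'No description on file'}"
        for r in org_records
    )
    return (
        "Answer using ONLY the records below. "
        "If the records do not contain the answer, say so.\n\n"
        f"Records:\n{context}\n\n"
        f"Question: {question}"
    )

# If Description is blank on half your records, the agent has nothing to reason
# over, no matter how capable the underlying model is.
print(build_grounded_prompt(
    "Which of our programs serve youth?",
    [{"Name": "After-School Tutoring", "Description": ""}],
))
```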

Through the various configuration settings, the myriad places to add instructions and prompts, and the underlying data, there are SO many ways for an agent to go wrong. When we combine these gaps in the technology with the gap between intuitive knowledge and learned knowledge, we have to recognize the potential for major frustration for nonprofits. And that’s not the only problem we are facing. In addition to our growing intuitive knowledge, we are simultaneously lowering our tolerance for failure with these tools.

Two to three years ago, when the first models were becoming available to the general public, we were all in a place of learning. What kinds of questions should I ask? What should I expect if I ask for an image to be created? Is this response normal? Why can’t it count the number of ‘r’s in Strawberry?

At the start, we knew it wasn’t going to be perfect, and we were OK with that! We understood that we were making a trade-off: speed for quality. Take using AI in place of a Google search to get a general sense of a topic or question. We know the answer might not be 100% correct, but we don’t care; we get 98% of the way there, and we do it in seconds. We’re happy we don’t have to click through eight different websites and condense several unique viewpoints into a coherent summary.

But as the months passed, as our experience with the tools matured, and as the models continued to improve at a blistering pace, we subconsciously lowered our tolerance for inaccuracies, hallucinations, and em dashes. We still understand it isn’t going to be perfect, but as each model improves upon the last, our tolerance for failure gets smaller and smaller.

Addressing New Expectations

The takeaway here is that, compared to standard Salesforce implementations, expectations will be higher for Agentforce! I would argue those expectations will only continue to grow as more people adopt AI in their personal lives, to say nothing of the cost of implementing AI in their organizations. I could write a second blog post on how we should react to that reality, but for now, here are three quick thoughts on how we, as SIs and consultants, can adjust to these higher expectations.

1. Counter High Expectations by Reducing Scope.

Challenge your organization to identify one clear problem and an AI solution that can solve it with 98% accuracy. Avoid the temptation to “AI-ify” everything. Remember, the public models are really good at a lot of tasks, but they also have much more data at their disposal. Our agents need to be thoughtfully constrained by the data actually available to us.

2. Fix your Data!

When configuring your own agent in Salesforce, you have the option to choose which LLM you want to use, but you can’t tweak the model itself. So start by fixing what is in your control, which, for most organizations, means doing the boring, tedious work of cleaning up your data. I know this is clichéd advice, but every hour spent cleaning data will pay massive, unseen dividends in response quality.
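If you want a concrete starting point, here is a rough data-quality audit sketch in Python using the third-party simple_salesforce library. It assumes you have API access to your org; the objects, fields, and credentials below are placeholders, so swap in whatever data your agent will actually be grounded on.

```python
# Rough data-quality audit sketch (assumes simple_salesforce is installed and
# your org allows API access). All object and field choices below are examples.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="admin@example.org",      # placeholder credentials
    password="your-password",
    security_token="your-token",
)

# Count records missing the fields an agent would rely on to answer questions.
checks = {
    "Contacts missing Email": "SELECT COUNT() FROM Contact WHERE Email = null",
    "Contacts missing Phone": "SELECT COUNT() FROM Contact WHERE Phone = null",
    "Accounts missing Description": "SELECT COUNT() FROM Account WHERE Description = null",
}

for label, soql in checks.items():
    result = sf.query(soql)
    print(f"{label}: {result['totalSize']}")
```

Even a simple count like this turns “clean your data” from vague advice into a prioritized to-do list.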

3. Start Training Staff Before Releasing Agents into your Org

Because most staff will already have an intuitive knowledge of how AI should work, some organizations may be tempted to treat this as a strength and “skip” the training. They might feel these Salesforce agents are so easy to use that training isn’t needed. I think that would be a mistake. Instead, use training to increase your staff’s learned knowledge while also explicitly undoing false intuition. Rather than letting staff attempt an action the agent can’t perform and then get frustrated, use training to state it clearly: “This agent does A; if you try to do B, it will not work.” Or, “This agent has access to your knowledge base and Opportunity records. It does not have access to our public-facing website.” By setting these expectations early, we can counteract the effects of our intuitive knowledge.

Conclusion

As we wrapped up our second day of the Sprint and shared report-backs, it became increasingly clear to me that these kinds of challenges will only grow as AI adoption grows. I felt incredibly grateful to have a safe, welcoming space organized by the Salesforce Community, for the Salesforce Community, to wrestle with these questions. As we continue to learn and implement more AI features, I am excited to go back for more community sprints. After all, these questions are not going away. AI is already here for many nonprofits, and consultants and SIs need to be prepared to support more than just the technical implementation and data cleansing. We need to find thoughtful, intentional ways to bridge the gap between intuitive knowledge and learned knowledge, and we will need to offer training that sets proper expectations. The challenges ahead will be hard, but this community gives me a lot of hope that we are prepared to tackle these problems together.



Written by Saul, who lives and works in Boise, ID. You should connect with them on LinkedIn.

Click here to read my AI Statement.