Simple, Responsible Ways to Start Using AI in Your Nonprofit
AI is a transformative tool that’s rapidly reshaping industries. With new players and products entering the conversation daily, it’s easy to feel overwhelmed. But don’t be intimidated — AI isn’t just for tech experts. Many AI tools are designed for everyday users, requiring little to no expertise, and they’re more accessible than ever.
“For nonprofits operating with limited resources, AI offers a powerful opportunity to make every donor dollar go further,” says Shannon Farley, co-founder of Fast Forward. “By leveraging AI tools responsibly, organizations can streamline operations and focus more energy on mission-critical work.”
Yesterday, Shannon spoke at Google.org’s first-ever Impact Summit, which brought together some of the top voices in social impact to discuss AI in the nonprofit sector. At the Impact Summit, Shannon was joined by Fast Forward’s Director of Programs, Addie Achan, and CareerVillage’s founder, Jared Chung. They explored the rapidly changing AI landscape and shared clear, practical steps all nonprofits can take to responsibly harness AI.
Their insights informed this guide, which will walk you through cost-effective, low-effort ways to integrate AI into your organization while maintaining ethical and responsible practices.
Demystifying AI: What is it?
Maybe you’ve read Co-Intelligence by Ethan Mollick. Maybe you’re an avid listener to the podcast Possible. Or perhaps you’ve dabbled in training neural networks. If so, you’re probably well-versed in AI, and you might want to skip this section. But for those who are just beginning to grasp AI and its capabilities, stick around.
As we’ve said, AI isn’t just for the tech elite. It’s for all of us. Let’s break it down.
Artificial Intelligence (AI): The capability of computers to perform tasks that typically require human intelligence. You’re likely already using AI if you’ve ever interacted with a chatbot, received personalized recommendations on Netflix, or used facial recognition to unlock your phone.
Machine Learning (ML): A subset of AI where computers learn from data to make predictions or decisions. A simple example is how your email filters spam by recognizing patterns from messages you usually mark as junk.
Generative AI: A type of AI that creates content, such as text, images, and sounds, based on patterns learned from existing data. This might be what comes to mind when you hear “AI.” Think Google Gemini or DALL-E.
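To make the machine learning definition concrete, here’s a tiny sketch of a spam filter built with the open-source scikit-learn library. The example messages and labels below are invented for illustration – the point is that the model learns word patterns from examples you’ve already labeled, rather than following hand-written rules.

```python
# A toy spam filter: the model learns word patterns from labeled examples.
# Requires scikit-learn (pip install scikit-learn). All messages are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a FREE prize now, click here",       # spam
    "Congratulations, you won a gift card",   # spam
    "Board meeting moved to 3pm Thursday",    # not spam
    "Here are the grant proposal edits",      # not spam
]
labels = ["spam", "spam", "not spam", "not spam"]

# Convert each message into word counts, then fit a simple classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free prize"]))  # -> ['spam']
```

That’s the same pattern-from-examples idea behind your inbox’s spam folder, just at a much smaller scale.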
Integrating AI Without the Overwhelm
Integrating AI into your organization’s workflow doesn’t have to be overwhelming. Instead of diving into complex AI projects like programming a holographic office assistant to water the plants, start simple. Consider using AI to automate routine tasks like scheduling social media posts or managing email responses.
Addie Achan, Fast Forward’s Director of Programs, leverages AI in her daily work. Addie also joined us at the Google.org Impact Summit, hosting one-on-ones with nonprofits to answer their burning AI questions. She knows her stuff. Here are some ways she helps all of us at Fast Forward use AI to drive efficiency and impact.
Addie and Kendall from the Fast Forward team led 1:1 AI coaching at Google.org’s Impact Hub.
Automate Repetitive Tasks
Addie: AI can be a powerful tool when it serves as a high-powered assistant. For example, we receive numerous inquiry emails from our prospective applicants. Many of those emails – and we, of course, love all of them – contain repetitive questions. We leverage AI to draft responses, saving significant time upfront. It’s crucial to review and personalize these drafts, but the initial automation streamlines the process. With one or two team members managing responses daily, this saves a lot of time. More time for actual impact!
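For teams curious what this looks like under the hood, here’s a minimal sketch of an email-drafting assistant using Google’s google-generativeai Python SDK. The model name, FAQ text, and inquiry below are placeholders we made up – not Fast Forward’s actual setup – and a human still reviews every draft before it goes out.

```python
# Draft a reply to a common applicant inquiry with Gemini, for human review.
# Requires: pip install google-generativeai, plus a Google AI Studio API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; never hard-code real keys
model = genai.GenerativeModel("gemini-1.5-flash")  # example model choice

faq = """Q: When are applications due? A: Applications close March 1.
Q: Do you fund for-profits? A: No, we only support tech nonprofits."""

inquiry = "Hi! Quick question - is the program open to for-profit startups?"

prompt = (
    "You answer emails for a nonprofit accelerator. Using only the FAQ below, "
    "draft a warm, concise reply to this inquiry.\n\n"
    f"FAQ:\n{faq}\n\nInquiry:\n{inquiry}"
)

draft = model.generate_content(prompt)
print(draft.text)  # a staff member personalizes this before sending
```

Grounding the prompt in your own FAQ keeps drafts on-message; the human review step catches anything the model gets wrong.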
Content Creation Assistance
Addie: AI can be a valuable tool for drafting content – like grant proposals. While much of the information we include in a grant proposal is repetitive, each funder has unique criteria: things like word count, formatting requirements, and special quirks. AI can compile our information into a draft, allowing us to customize the proposal. That’s just the beginning, though. We thoroughly review, edit, and add human magic to these drafts. The human touch is essential.
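As a rough illustration of that compile-and-customize step, here’s a sketch using the same Gemini SDK. The file name, funder requirements, and boilerplate are hypothetical – the idea is simply to feed the model your reusable organizational text plus one funder’s specific constraints.

```python
# Turn reusable org boilerplate into a funder-specific draft for staff to edit.
# Requires: pip install google-generativeai. File and requirements are invented.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # example model choice

# Reusable text about your mission and programs, kept in one place.
boilerplate = open("our_mission_and_programs.txt").read()

requirements = "Max 100 words. Formal tone. Must name the population served."

prompt = (
    "Rewrite the following into a grant proposal summary.\n"
    f"Funder requirements: {requirements}\n\n{boilerplate}"
)

print(model.generate_content(prompt).text)  # a first draft, never the final one
```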
AI also assists with brainstorming. It can act as a thought partner, providing new insights and perspectives that we might not have considered. It can even help with inclusion by offering viewpoints outside of our regular mental models. While we may come up with five examples as a team, AI can evaluate our ideas, offering pros and cons to determine what fits best. Although AI isn’t the end-all-be-all and requires human refinement, it definitely boosts the productivity of any brainstorming session.
Gemini saves hours by turning a 500-word grant proposal into 100 words.
Data Analysis
Addie: AI can analyze both qualitative and quantitative data. It’s pretty smart. And it goes beyond merely identifying trends – it also provides actionable insights. For instance, consider Fast Forward’s decade-long history and 100+ alums.
Understanding alum needs can be a complex process. One way we gather their feedback is through censuses via Google Forms. Responses are compiled in a spreadsheet. An AI tool like Gemini can interpret the data. It identifies trends, highlights key points, and provides an overview of the results. It can even suggest programming ideas based on the feedback. However – and you know what I’m going to say – it’s important to note that AI’s suggestions are just a starting point. We need to refine these insights. Did I mention that the human touch matters?
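For teams comfortable with a spreadsheet export, here’s a hedged sketch of that workflow in code. The file name and column header are placeholders we invented; the pattern is simply to export the Google Forms responses as a CSV, then ask Gemini to summarize them.

```python
# Summarize open-ended census responses exported from Google Forms as a CSV.
# Requires: pip install pandas google-generativeai. Names are placeholders.
import pandas as pd
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # example model choice

responses = pd.read_csv("alum_census.csv")  # one row per form response
answers = "\n".join(responses["What support do you need most?"].dropna())

prompt = (
    "Here are open-ended survey answers from nonprofit alumni. Identify the "
    "top three trends and suggest one program idea for each.\n\n" + answers
)

print(model.generate_content(prompt).text)  # a starting point staff then refine
```

A small survey fits comfortably in one prompt; a very large dataset would need to be summarized in batches.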
Navigating Ethical Questions in AI for Nonprofits
Now that you understand the basics of AI, it’s crucial to consider the ethical questions that come with its use – particularly in the context of nonprofits. Below are key ethical considerations your organization should address when implementing AI.
Jared Chung, founder of CareerVillage.org, offers valuable insights on how to navigate these challenges. Just yesterday, Jared and Shannon were in conversation about how to fund AI in nonprofits responsibly. A social impact veteran, Jared recently released a new AI product, Coach, which offers personalized, expert-backed career coaching for everyone. He has been laser-focused on how to apply AI ethically. Read on for his insights.
How can you ensure that AI doesn’t reinforce or amplify existing biases, especially when making decisions that impact vulnerable communities?
Jared: I’ve been inspired to see how quickly bias can be worked out of AI models. They’re actually quite malleable — much more malleable than humans in some ways.
The key is being able to monitor and identify the biases, and that’s where we’ve focused. We need to ask how we can make sure we’re monitoring the quality, consistency, and safety of what our AI applications are doing and use that to iterate.
Also, consider that the technology will only get better for the communities it’s built for if beneficiaries are part of the feedback cycle through their use of these products. The way out of the catch-22 is to put the technology in front of communities, be vocal with them about its limitations, and then shut our mouths, open our ears, and let them guide us.
Coach, CareerVillage’s AI-powered platform.
How does your organization stay transparent with stakeholders when using AI tools, particularly when AI-driven decisions might be difficult to explain or understand?
Jared: It’s not easy. The most important part is making sure that we have real staff talking to our stakeholders as much as possible. We also host a coalition of practitioners and stakeholders that meets periodically to share insights and ask questions.
Who is responsible when an AI system makes a mistake? How does your organization ensure accountability in AI-driven processes?
Jared: If you put an AI system in front of a beneficiary, you have to be responsible for what it does and how it behaves. But I’ve found that the best allies in this are the beneficiaries themselves. Be transparent with them about the limitations of the tool. Talk to them about how AI works. Make sure there’s sufficient AI literacy before they start using it. That can greatly reduce the impact of mistakes when they do occur. It also helps ensure that, alongside accountability, there’s the thoughtfulness to tip the balance so that the benefits of the technology vastly outweigh the downside of AI mistakes.
Curious to find out more? At the Summit, Google.org launched Get Time Back, which connects nonprofits with free AI resources specifically tailored to their needs.
As you build more AI into your nonprofit’s operations, keep in mind that starting small can make a big impact on your efficiency – and you can do it ethically and responsibly along the way. Start simple, stay human, and let AI amplify your impact.