I recently attended the BOI (Board of Innovation) Autonomous Innovation Conference. It was a rich and insightful look into how AI is transforming the innovation landscape and providing new tools and approaches to disciplined innovation. Take some time to read my 5 insights for nonprofit leaders.
(NOTE: To understand the items below, it is important that you understand what an AI Agent is. Here is a short, simple, non-technical video to get you comfortable with the next stage of AI technology.)
Thought Leader:
Leo Velásquez
- What is it? Social Listening utilizes AI Agents to listen to your customers/users via social media.
- How does it work? You design an AI Agent to review all the videos, images, and text (posts and comments) related to your product, then use agents that specialize in different aspects of the data to analyze intent, engagement, pain points, and so on. The agents can then cluster your social data into personas and other groupings based on that analysis (a minimal code sketch of the clustering step follows this section).
- Implications for Nonprofits: For those nonprofits that have built a strong social media presence and consistent audience engagement, we can use AI Agents to listen to and analyze people's input and then act on what we learn.
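To make the clustering step concrete, here is a minimal, illustrative sketch in Python. It assumes you have already exported posts as plain text and that scikit-learn is available; the sample posts, the two-cluster choice, and the use of TF-IDF plus k-means are my own simplifications, not anything the speaker presented.

```python
# A toy sketch of clustering exported social posts into rough personas/themes.
# The posts, cluster count, and method are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [
    "Love the new volunteer signup process, so easy!",
    "Volunteering last weekend was a great experience.",
    "I couldn't find where to donate on the website.",
    "The donation form keeps failing on my phone.",
]

# Turn each post into a numeric vector based on its wording.
vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)

# Group similar posts; each cluster is a rough candidate persona or theme.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, post in zip(labels, posts):
    print(label, post)
```

In practice the specialized agents the speaker described would do far richer analysis (intent, engagement, pain points), but the group-then-label pattern is the same.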
Thought Leader:
Amir Ouki
- What is it? An AI Simulation engine is "a virtual model of a complex real-world system that analyzes, predicts and optimizes outcomes before you take action." This means you can simulate countless outcomes, based on data and parameters you set, as a way of making decisions about how to move forward with product development or as you use a data product.
- How does it work? The diagram below is very helpful. Essentially, you build an AI Simulation Model based on your custom data and then run scenarios to make decisions. It can be used internally in product development or strategic planning, but it can also support decision making for the customers/users of a data product (a simple simulation sketch follows this section).
- Implications for Nonprofits: While it can be very valuable to simulate scenarios related to our programs in the field or our donor engagements, we are unlikely to have enough data to leverage this the way major corporations might. One likely use case is sharing data across multiple organizations so everyone gains insight.
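The scenario-running idea can be illustrated with a very small Monte Carlo simulation. This is a sketch under assumptions I have invented (a made-up donor count, gift size, and response rate), not the engine the speaker described, but it shows the pattern of simulating many outcomes before acting.

```python
# A toy Monte Carlo sketch: run one scenario many times and look at the spread
# of outcomes before committing to a decision. All numbers are invented.
import random

def simulate_campaign(n_donors=500, avg_gift=40.0, response_rate=0.08):
    """Simulate one fundraising campaign and return total dollars raised."""
    responders = sum(random.random() < response_rate for _ in range(n_donors))
    return sum(random.gauss(avg_gift, 10.0) for _ in range(responders))

# Repeat the scenario thousands of times to see the range of likely outcomes.
outcomes = sorted(simulate_campaign() for _ in range(10_000))
print("median raised:  ", round(outcomes[len(outcomes) // 2]))
print("10th percentile:", round(outcomes[len(outcomes) // 10]))
```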
Thought Leader:
Jo McKinney
- What is it? We are entering a time when we will have to be skilled at marketing to both humans and AI Agents. The reason is that AI Agents will be doing much of the decision making for humans and so we will have to know how to “persuade” or “influence” the agents to utilize our products, services, or programs.
- How does it work? It’s the AI version of SEO. Instead of influencing the search engine as it gets optimized, you are influencing the AI Agent as it learns to support the human it works for. Increasingly, AI Agents will anticipate human needs, nudge them with AI-generated options, and then learn from how the human responds. They are calling this the Perpetual Loop (a toy sketch of the loop follows this section).
- Implications for Nonprofits: We have to understand how humans will use AI Agents to manage tasks and make decisions. Then we need to optimize our recruiting, fundraising, and program-engagement efforts so that the agents helping our audiences with those tasks and decisions will consider our solution when presenting options to the humans they support. While this sounds like science fiction, major corporations are already building out these capabilities, so they will trickle down to nonprofits sooner than we think.
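As a thought experiment, the nudge-and-learn loop can be sketched as a tiny recommendation loop in Python. The options, acceptance behavior, and the simple explore/exploit rule are hypothetical stand-ins of my own, not how any real agent platform works.

```python
# A toy "Perpetual Loop": the agent nudges a person with options, observes
# what they accept, and favors what has worked before. Everything is invented.
import random
from collections import defaultdict

options = ["volunteer locally", "donate monthly", "attend an event"]
accepted = defaultdict(int)  # how often the person said yes to each option

def nudge():
    """Mostly suggest what has worked before, but occasionally explore."""
    if random.random() < 0.2 or not any(accepted.values()):
        return random.choice(options)
    return max(options, key=lambda o: accepted[o])

def human_response(option):
    """Stand-in for a real person; this one leans toward volunteering."""
    return option == "volunteer locally" and random.random() < 0.7

for _ in range(200):
    choice = nudge()
    if human_response(choice):
        accepted[choice] += 1  # the agent learns from each acceptance

print(dict(accepted))
```

The nonprofit question is simply whether your program is one of the options the agent ever learns to surface.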
- What is it? An LLM is a Large Language Model, which lets you use natural language to work with an artificial intelligence. An LAM is a Large Action Model, which lets you train AI Agents to do the work you need. The speaker focused on very small LAMs that can reside on a cell phone and support field workers.
- How does it work? You train an AI model with the ability to select the right tool for the job and then use that tool to accomplish it. This means you can do much more than simply ask questions like “How many appointments do I have this afternoon?” Now you can say, “Reach out to these three people and schedule them for meetings in between the appointments that I have this afternoon, and send them this proposal once they have accepted the meeting invite.” The key to these models being small and mobile is that they are multimodal: able to take audio, video, and text inputs from field workers on the move and manage all the actions needed to follow up on what the human is doing as they engage with partners, donors, or field activities (a minimal tool-selection sketch follows this section).
- Implications for Nonprofits: There are many ways LAMs might support field workers and staff on the move. This could include helping with administrative functions like reservations and expense reports, but it could also include itinerary coordination, research on locations, summarizing meetings for reports, and analysis of opportunities on the ground.
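The "select the right tool, then act" pattern can be sketched in a few lines of Python. The tools below, and the keyword matching that stands in for a trained model's routing step, are hypothetical simplifications of my own.

```python
# A toy sketch of the LAM pattern: route a request to a tool, then act.
# The tools and the keyword routing are illustrative stand-ins only.
def check_calendar(request):
    return "You have 3 appointments this afternoon."  # placeholder data

def schedule_meetings(request):
    return "Meeting invites sent for the open afternoon slots."

TOOLS = {"calendar": check_calendar, "schedule": schedule_meetings}

def pick_tool(request):
    """Stand-in for the model's tool-selection step."""
    if "schedule" in request.lower() or "reach out" in request.lower():
        return TOOLS["schedule"]
    return TOOLS["calendar"]

request = "Reach out to these three people and schedule them between my appointments."
print(pick_tool(request)(request))  # the agent acts instead of just answering
```

A real LAM would chain many such steps and handle audio or photos from the field, but the routing idea is the same.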
Thought Leader:
Dominik Heinrich
- What is it? This is a move from design thinking to thinking design: an iterative process in which design is constantly modified as humans use AI to generate new outputs, test them, and then reimagine the next iteration. They describe it as “adaptive and omni modality,” meaning the design is always changing and adapting to our needs across all modalities (text, audio, video, touch, etc.).
- How does it work? The two key concepts shared were “Aconic” and the “Infinite Design Method.” The diagrams below describe them better than I can (a toy sketch of the iterate-and-test loop follows this section).
- Implications for Nonprofits: Program, product, and graphic design are going to move to an increasingly conversational, iterative, and dynamic posture. We will be constantly imagining, testing, creating, getting feedback, and reimagining, all with AI as a helper in that process. Gone are the days when we redesign the website and then keep it for a set amount of time; the online product will always be changing. That raises a lot of questions about quality control, mission drift, and operational control.
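The generate-test-reimagine cycle can be illustrated with a tiny hill-climbing loop in Python. The "generator" and the feedback score below are invented placeholders for AI tools and real user input; the point is only the shape of the loop.

```python
# A toy generate / test / reimagine loop. The generator and feedback function
# are invented placeholders for AI-produced variants and real user feedback.
import random

def generate_variant(design):
    """Stand-in for an AI tool proposing a tweak to the current design."""
    return design + random.uniform(-1.0, 1.0)

def feedback_score(design):
    """Stand-in for user feedback; designs closer to 10 score better here."""
    return -abs(10.0 - design)

design = 5.0
for _ in range(20):
    candidate = generate_variant(design)
    if feedback_score(candidate) > feedback_score(design):
        design = candidate  # keep the variant the feedback preferred

print("design after 20 iterations:", round(design, 2))
```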