The Stash | October Edition

Welcome to October's edition of The Stash, our monthly newsletter where we share tools and tactics for teams putting Generative AI to work.


Driving In-house AI Innovation

What happens when you have thousands of enterprise employees experimenting with AI? onova's CEO, Victor Li, joined our latest webinar to talk about how companies like McDonald's are transforming hackathon ideas into production-ready solutions in months. We focused on why AI is changing the innovation playbook:

  • Everyone's starting fresh - gen AI is new, and opportunity is everywhere
  • Experiments create immediate feedback loops - learning happens live
  • Non-technical teams can build meaningful solutions - no more technical barriers

Watch the full webinar on-demand here.


Recap: Prompt x Data View, A New Way to Work with LLMs

We shared our new Prompt x Data View interface update in a recent webinar focused on how teams can move beyond one-off prompting to build scalable, reusable AI workflows.

Watch the full recording to see:

  • How to combine prompts and data for enterprise projects
  • A live demo of analyzing 800+ RFPs using multi-step AI workflows
  • A preview of "Threads," our new conversational interface – coming soon!



October in Montreal

"Préparer le terrain pour l'IA," an event hosted by SE Cloud Experts in collaboration with 谷歌 , brought industry leaders together for an evening of knowledge sharing on building AI-ready organizations. Stefan Lotz ( 谷歌 ), Sebastien Lamoureux and Stacy Véronneau ( SE Cloud Experts ), and Marceau Boulenger and Olivier Winter ( Lightspeed Commerce ) shared insights and active use cases. A great evening spent with peers and partners.

Later in the month, we were thrilled to join Google in sponsoring the inaugural AI Tinkerers meetup, bringing together over 100 AI engineers, entrepreneurs, and tech enthusiasts for hands-on demos. The diversity of applications - from marketing to government services - showcased the maturity of Montreal's AI ecosystem. Stay tuned for more events like this!



The 2010s taught enterprises a crucial lesson: disparate data systems don't scale. Companies that built centralized data lakes and unified governance had a lasting advantage over those juggling disconnected strategies.

We face the same strategic choice with AI. Today, an adoption process might look like this:

  • Data Science teams build custom model implementations
  • Marketing teams experiment with one set of models
  • Sales teams build similar but separate solutions
  • Support teams develop isolated knowledge bases
  • Product teams maintain their own prompt libraries

Each function optimizes for its needs, creating a new kind of fragmentation. Knowledge, costs, and security become increasingly difficult to manage - exactly the challenge that drove data centralization a decade ago.

Centralized AI strategies are becoming what data lakes became in the early 2010s: critical infrastructure for scaling value. Here's why:

Single Source Access

  • One platform for all model providers
  • Unified prompt and workflow library
  • Shared best practices across teams, a standardized way of working with AI (see the sketch below)
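
To make "single source access" concrete, here is a minimal, hypothetical sketch in Python of one shared gateway that fronts every model provider and a common prompt library. The names (ModelGateway, PromptTemplate, register_provider) are illustrative assumptions for this newsletter, not HumanFirst's product or any real provider SDK.

# Hypothetical sketch only: the provider handler below is a stand-in,
# not a real model SDK call.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class PromptTemplate:
    """A prompt stored once and reused by every team."""
    name: str
    text: str

    def render(self, **inputs: str) -> str:
        return self.text.format(**inputs)


class ModelGateway:
    """One governed entry point for every model provider and prompt."""

    def __init__(self) -> None:
        self._providers: Dict[str, Callable[[str], str]] = {}
        self._prompts: Dict[str, PromptTemplate] = {}

    def register_provider(self, name: str, handler: Callable[[str], str]) -> None:
        self._providers[name] = handler

    def register_prompt(self, template: PromptTemplate) -> None:
        self._prompts[template.name] = template

    def run(self, provider: str, prompt_name: str, **inputs: str) -> str:
        # A single place to attach logging, cost tracking, and access control.
        prompt = self._prompts[prompt_name].render(**inputs)
        return self._providers[provider](prompt)


# Example: every department calls models the same way.
gateway = ModelGateway()
gateway.register_provider("demo-model", lambda p: f"[model response to] {p}")
gateway.register_prompt(
    PromptTemplate(name="summarize_rfp", text="Summarize this RFP: {rfp_text}")
)
print(gateway.run("demo-model", "summarize_rfp", rfp_text="..."))

In a setup like this, the gateway's run method is the one place where usage, cost, and access policies can be enforced for every team.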

Cross-Enterprise Competence

  • Learnings, projects, and solutions flow between departments
  • Individual expertise becomes a company-wide advantage
  • Insights compound over time, innovation spreads organically

Unified Control

  • Centralized security and compliance
  • Company-wide cost management
  • Consistent governance standards, clear visibility across all AI use

Early adopters of centralized AI strategies are already seeing the rewards: faster innovation, lower costs, and exponential learning across teams. The winners in 2025 won't be the teams with the most AI tools, but those with unified access to models, knowledge, and workflows. The companies that invest in this foundation today will build the blueprint for this next phase of work.
