I'm adding a quick shout-out to all the folks at Google who hosted the #CFD20 delegates at the Moffett Place center last Thursday. They were excellent hosts and presenters, and they gave our #TechFieldDay team an inside look at what the future of #GenerativeAI holds for anyone using Google Cloud technology in the coming months and years.
I promise to dive much deeper into the Google technology stack in an extensive blog post later this week, but for now, here are some key takeaways while they're fresh in mind:
- Google is building an impressive technological underlayment for its generative AI tools, starting at the network, storage, and security levels, to ensure that when GenAI really takes off - and yeah, based on what we saw, they are just getting started - the massive computing capacity is already in place and ready for expansion.
- We've all heard how CPUs and GPUs are crucial for improving the velocity of AI model training, testing, inference, and prompt generation, but Google actually offers a third choice: TPUs, short for Tensor Processing Units (see below). I'll admit I was fully ignorant of this technology until they spoke about it in detail during our visit. (Yes, it does indeed pay to leave one's tech bubble from time to time.)
- Finally, their Gemini generative AI tool is built to take full advantage of this entire stack. From what their team showed us, it's a fully multi-modal toolset that can take just about any input - text, code, images, live photography, even sketched drawings - to perform context-sensitive searches and return impressive results with very little coding.
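To make the TPU point above a bit more concrete: TPUs are exposed to developers through XLA-compiled frameworks like JAX, so the same tensor code runs unchanged on CPU, GPU, or TPU. This is a minimal sketch (assuming JAX is installed; on a machine without a TPU it simply falls back to the CPU backend):

```python
import jax
import jax.numpy as jnp

# JAX dispatches to whatever accelerator backend is available
# (TPU, GPU, or CPU); the code itself doesn't change.
print(jax.devices())

@jax.jit  # compile via XLA, the same compiler stack that targets TPUs
def matmul(a, b):
    return a @ b

a = jnp.ones((128, 128))
b = jnp.ones((128, 128))
c = matmul(a, b)
print(c[0, 0])  # each entry is a dot product of two all-ones rows: 128.0
```

The appeal is exactly this portability: you prototype on a laptop CPU, then point the same JIT-compiled function at a TPU pod for training-scale workloads.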
Again, more coming soon! Check out my blog in a few more days for a more extensive look into Vertex AI and its technological underpinnings.