Is It Possible (and Safe) to Build Your Own AI Without Tech Expertise?
Przemek Majewski
Living with Diabetes | AI Strategist | DLabs.AI CEO | Ex-CERN
Welcome to this sweltering April day (well, here in Poland at least)!
Today's edition of our newsletter promises to be just as heated as we dive into a provocative topic: can individuals without a technical background safely and effectively create AI applications, especially in critical fields like healthcare, where accuracy is non-negotiable?
We'll also explore the reassuring trend of more companies turning to consultancy services. This shift acknowledges the limitations of in-house teams and enhances the creation of reliable, risk-mitigated AI solutions.
Additionally, we'll cover some exciting developments in the world of Large Language Models (LLMs) and the groundbreaking partnership between OpenAI and Moderna. Plus, don’t miss a unique video featuring an interview where the interviewer is none other than the interviewee themselves!
And there’s more—for those who persevere to the end, I have a free downloadable PDF that can help you build AI products tailored to real user needs.
Let’s start this journey together!
Myth Boosted by Widely Available GenAI Apps: Can Anyone Create Safe AI?
Today, I want to shed light on a growing trend that's reshaping perceptions around artificial intelligence. This trend, largely propelled by the accessibility of platforms like ChatGPT, is empowering users to feel like they can “create AI themselves.”
Let me share an instance from a recent LinkedIn conversation. A while back, my team and I developed SugarAssist, a GPT-based chatbot tailored to comprehensively support diabetes management and awareness. This tool offers personalized assistance with blood sugar monitoring, medication management, dietary impacts, and physical activity, and provides emotional support for individuals newly diagnosed with diabetes and their families.
The creation of SugarAssist was far from trivial. It involved not just AI experts and developers but also doctors and people living with diabetes who contributed insights to make the application both accurate and empathetic. We grounded SugarAssist in data meticulously compiled from an ebook we developed over several months, ensuring the chatbot's advice was both reliable and actionable.
Recently, I reached out to several diabetes educators on LinkedIn, suggesting they test SugarAssist with their patients to evaluate its utility and gather feedback for further enhancements. One of the replies I received was startling: “No thanks, I'll build something similar myself for my needs.”
Well, this response starkly highlights a common misconception about AI applications, especially in sensitive areas like healthcare. The assumption that effective and safe AI solutions can be developed independently, without a deep understanding of AI's complexities, is not just surprising; it's a wake-up call.
Developing robust AI solutions like SugarAssist extends beyond mere coding skills. It requires a profound grasp of AI's intricacies, especially the critical importance of data privacy and precise information delivery. The "do-it-yourself" mentality, particularly prevalent among non-tech professionals, dangerously simplifies the monumental challenges involved in creating AI tools that are safe, reliable, and genuinely beneficial.
For a closer look at the process we undertook to ensure SugarAssist was up to the task, check out this detailed article.
So, have any of you encountered a similar approach? I’d love to hear more examples, so please share your experiences in the comments.
Phi-3 Mini: Microsoft's Answer to Cost-Effective, Efficient AI Solutions
The past month has been a whirlwind in the world of Large Language Models (LLMs), with a rapid succession of releases that could make anyone's head spin. We've seen the debut of Gemini Pro 1.5, Grok 1.5, and Llama 3, but it's Microsoft's latest release, Phi 3, that's capturing the most attention.
Phi 3 represents a significant leap forward in the realm of small AI models. In essence, these smaller models are engineered to be faster, cheaper, and more efficient than their larger counterparts, like the formidable GPT-4. They excel in handling simpler tasks, such as text summarization, which makes them particularly appealing for a wide range of applications.
Microsoft's launch of Phi-3 Mini marks the first in a trio of smaller models the company plans to roll out. With 3.8 billion parameters, Phi-3 Mini is trained on a relatively modest data set, yet its performance rivals that of much larger models. Available now on platforms like Azure, Hugging Face, and Ollama, Phi-3 Mini is just a precursor to the forthcoming Phi-3 Small and Phi-3 Medium, promising 7 billion and 14 billion parameters, respectively.
Eric Boyd, corporate vice president of Microsoft Azure AI Platform, has remarked that despite its compact size, Phi-3 Mini rivals the capabilities of larger LLMs such as GPT-3.5. This efficiency is not just a technical achievement; it also means lower operational costs and better performance on personal devices like smartphones and laptops.
What sets Microsoft's approach apart is their innovative training method, which Boyd likens to the learning process of children—from bedtime stories to books with simple language structures. Microsoft even created custom 'children's books' using an LLM to teach Phi, adapting complex topics into simpler, digestible content.
While Phi-3 has its limitations compared to juggernauts like GPT-4, especially in terms of general knowledge breadth, its precision in coding and reasoning tasks is remarkable. For many companies, especially those with smaller internal data sets, Phi-3 models offer a more cost-effective solution without sacrificing performance.
As Microsoft continues to innovate with the Phi-3, it's evident that the landscape of LLMs is rapidly evolving. Smaller, more specialized models like the Phi-3 are transforming from mere alternatives to essential tools that provide unique advantages for specific applications.
The decision to choose the right AI model isn’t always about chasing the newest technology. For instance, at DLabs.AI, while developing an AI agent for students, we chose GPT-3.5 Turbo over the available GPT-4. This decision was driven by the specific needs of our project: GPT-3.5 Turbo offered a larger token limit, competitive pricing, and a level of response accuracy that was perfectly aligned with our requirements. This strategic choice allowed us to boost performance and reduce costs while maintaining high-quality outputs.
This underscores a vital lesson: spending more doesn’t necessarily mean better results. The emergence of new LLM models provides us with a wider array of choices, enabling more tailored solutions without unnecessary expenditure. In my view, the evolution of LLMs is not just about technological advancement but about smartly integrating these tools to maximize efficiency and effectiveness.
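To make this cost tradeoff concrete, here's a minimal Python sketch that estimates monthly spend for two models under a given workload. The per-token prices below are illustrative placeholders chosen for the example, not quoted vendor rates, and the model names simply stand in for a cheaper and a pricier option.

```python
# Minimal sketch: estimating monthly LLM API cost for an average workload.
# Prices are illustrative placeholders, NOT current vendor rates.

ILLUSTRATIVE_PRICES = {
    # model: (input $ per 1K tokens, output $ per 1K tokens)
    "gpt-3.5-turbo": (0.0005, 0.0015),
    "gpt-4": (0.03, 0.06),
}

def monthly_cost(model: str, requests_per_month: int,
                 input_tokens: int, output_tokens: int) -> float:
    """Estimate monthly spend given an average request shape."""
    in_price, out_price = ILLUSTRATIVE_PRICES[model]
    per_request = (input_tokens / 1000) * in_price \
                + (output_tokens / 1000) * out_price
    return per_request * requests_per_month

# Example workload: 100k requests/month, ~1,500 input and 500 output tokens each.
for model in ILLUSTRATIVE_PRICES:
    print(f"{model}: ${monthly_cost(model, 100_000, 1500, 500):,.2f}/month")
```

Even with placeholder numbers, the sketch shows how quickly a larger model's per-token premium compounds at scale; if the cheaper model's accuracy already meets your requirements, the premium buys nothing.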
Source: Microsoft
Digital Immortality? Reid Hoffman's AI Twin Explores New Frontiers
Have you ever wondered what it would be like to have a conversation with... yourself? Reid Hoffman, the co-founder of LinkedIn, turned this thought experiment into reality with the introduction of "REID AI," a digital twin that not only looks and talks like him but also mirrors his gestures and expressions. Created by Hour One, this digital replica is a culmination of decades' worth of Hoffman's books, speeches, and podcasts.
While there are subtle quirks—occasional odd hand gestures, an overuse of buzzwords—that betray its artificial nature, the overall effect is remarkably lifelike. Don't just take my word for it, see this astounding interview for yourself:
Imagine the possibilities with REID AI: from generating customized versions of Hoffman's speeches for diverse audiences to engaging in a lifelike Zoom call for startup advice or LinkedIn optimization tips.
However, as exciting as these possibilities are, they come with their own set of risks. The rise of such advanced AI replicas brings concerns about privacy and the potential for deepfake misuse. Hoffman himself emphasizes the importance of establishing robust guidelines to protect individuals' identities and reputations.
Moreover, this technology stirs profound questions about identity and legacy. Could AI replicas keep our personas 'alive' long after we're gone? The potential and the ethical considerations are vast, marking a significant step into a future where our digital selves might outlive us.
Consultancy Costs Outpace Internal Staffing: A New Trend in IT Expenditure
Now, let's dive into the financials. Gartner, Inc.'s latest forecast paints a vivid picture of the IT landscape, projecting global IT expenditures to hit $5.06 trillion in 2024, an impressive 8% increase from the previous year. This surge points to a robust trajectory that could see IT spending surpass the $8 trillion mark well before the decade closes.
A particularly intriguing shift in this financial landscape is the rise in consultancy spending, which, for the first time, is outpacing internal staffing costs. John-David Lovelock, a vice president at Gartner, highlights a significant industry challenge: enterprises are increasingly unable to match the IT talent that service providers attract. This talent gap is nudging companies more and more towards external consultants.
This trend transcends mere staffing solutions; it's about tapping into a wellspring of expertise that can spearhead transformative projects, especially in cutting-edge fields like Generative AI (GenAI).
At DLabs.AI, we've noted a growing awareness among companies about the complexities involved in mastering AI. This realization has led to an increased demand for external professional guidance that melds deep technical knowledge with strategic business insights. Companies are seeking to augment their internal capabilities with expert advice to fully leverage the power of AI technologies.
From my perspective, this shift is a beneficial development. Tackling projects internally without the requisite expertise can lead to inefficiencies and unnecessary expenses. By collaborating with seasoned experts, companies can not only accelerate their project timelines but also improve their outcomes, thereby gaining a competitive advantage in today's fast-paced market.
Source: Gartner
Transforming Biotech: Moderna and OpenAI Pave the Way with AI-Driven Solutions
Remember Moderna, the biotech powerhouse behind one of the leading COVID-19 vaccines? Moderna has now teamed up with OpenAI to propel the development of its next-gen mRNA medicines, aiming to boost both the innovation and productivity across its operations.
A recent YouTube video by OpenAI showcases this dynamic partnership in action, highlighting how Moderna is leveraging ChatGPT Enterprise to revolutionize its approach to medical research and corporate efficiency.
The partnership has already yielded several standout applications that are reshaping how Moderna operates.
By embedding AI deeply into their processes, Moderna isn’t merely keeping pace with technological trends—they are setting new benchmarks in the pharmaceutical industry. It's a powerful testament to how AI can extend beyond traditional tech boundaries to enhance human health and global well-being significantly. This is not just about cutting costs or speeding processes; it's about leveraging AI to save and improve lives on a massive scale.
Source: OpenAI
Ensure Your AI Hits the Mark: Start with the Right User Insight
Last but not least, I want to leave you with a tool that could streamline your approach to launching new AI products: the "Checklist of User Insights Before Developing AI Products." Amidst the rush of AI development, it's crucial not to lose sight of the core element of any successful product—its users.
This checklist is a vital resource, comprising 17 essential questions that should be addressed before you dive into the development phase. These inquiries cover everything from user interface preferences to business models, ensuring that your product is not only technologically sound but also finely tuned to meet real-world needs and expectations.
Remember, the key to innovation isn’t just in the coding; it’s in understanding and prioritizing the people who will use your product. If you're interested in aligning your next project with the genuine demands of your target audience, you can download a PDF copy of the checklist here.
This simple step could be what sets your next AI initiative on the path to success.
---
This is the end of our April edition!
I hope it has enriched you with new knowledge and perhaps even inspired you to delve deeper. I wish you all a great day, and to my Polish readers, a wonderful long May weekend!