Monthly Tech Bites #39 | Entering the era of ethical AI
Hey there, it’s Jerzy – Chief AI Officer at Miquido, and you’re reading your monthly dose of AI and business news!
This month, AI is having a moment – again. But this time, it’s not just about what AI can do. It’s about whether we can actually trust it. Creativity? Check. Efficiency? Check. But business-grade reliability? That’s where things used to get tricky. Copyright battles, ethical risks, PR disasters caused by AI hallucinations – up until now, trust was the last great frontier for AI adoption at scale.
Now we’re seeing major players making moves to solve that. Adobe is doubling down on commercial-grade AI with Firefly, ensuring businesses can create AI-generated content without waking up to a lawsuit. OpenAI is taking a stand against bad actors, and its new deal with The Guardian aims to inject some much-needed credibility into ChatGPT’s 300 million-strong user base.
Meanwhile, Elon Musk’s xAI has released Grok-3, a model promising better reasoning and fewer hallucinations. However, its semi-open nature raises key transparency concerns. In a world where AI can be as easily weaponized as it can be used for good, this is a challenge we can’t afford to overlook.
In this issue, we’ll dive into:
Adobe Firefly: The First Truly Business-Safe AI Video Generator
OpenAI and The Guardian: Can AI Combat Misinformation?
Embracing AI Without Losing the Human Touch
Grok-3: The Secret Future of AI?
Let’s jump in.
Adobe Firefly: The First Truly Business-Safe AI Video Generator
At Miquido, we work with a lot of well-established companies, and when it comes to AI, one thing keeps coming up: AI isn’t just about what it can do – it’s about whether it’s legally safe to use.
That’s where Adobe’s latest Firefly update stands out. Billed as the first commercially safe generative AI for video, Firefly is trained only on licensed and rights-cleared content. In other words, businesses can finally create AI-generated media without worrying about legal headaches.
Until now, AI-generated content has been stuck in a legal gray area. Adobe is betting that companies will pay extra for AI they can actually trust. And with big names like Deloitte, IBM, and PepsiCo already on board, the message is clear – responsible, ethical AI can become a standard tool for business.
OpenAI and The Guardian: Can AI Combat Misinformation?
Generative AI and journalism have had a rocky relationship (just ask The New York Times or Tribune Publishing). Now, OpenAI’s partnership with The Guardian Media Group is aiming to change that. ChatGPT users will get direct access to verified news, reducing reliance on unreliable sources.
OpenAI says this is part of a broader effort to prevent AI from spreading misinformation. But it also raises a big question – who gets to decide what counts as “trusted news”? And will other publishers get on board as well?
Embracing AI Without Losing the Human Touch
AI is reshaping how we work, create, and connect – but how can businesses leverage its power while staying true to their values and vision?
In a new interview on my YouTube channel, I sat down with Julia Matuszewska, a Prompt Designer and AI Growth Consultant at Miquido, to explore the evolving role of generative AI in business and everyday life.
Key Takeaways:
Prompt design is evolving. Once a niche skill, it's now a basic competency – like using Google Docs. Instead of hiring “prompt engineers,” companies expect employees across departments – marketers, developers, strategists – to integrate AI into their roles.
AI can amplify creativity – but also flatten it. Over-relying on AI risks turning creativity into a predictable, standardized process. Julia warns of “McDonaldized” content – efficient but lacking depth, originality, and human emotion.
Are we outsourcing too much thinking? AI-generated content is growing, but at what cost? Experts worry about linguistic homogenization, information overload, and “model collapse,” where AI regurgitates AI-generated data, diluting originality.
Businesses need to establish ethical guidelines to ensure AI enhances – not replaces – human engagement. The key? Use AI strategically, but keep humanity at the center.
Grok-3: The Secret Future of AI?
Elon Musk’s AI startup xAI just dropped Grok-3, a powerful new language model with 10 times the compute power of its predecessor. xAI claims it beats OpenAI’s GPT-4o, Google Gemini, and DeepSeek-V3 in math, science, and coding. But while the numbers sound impressive, the bigger story is about trust and transparency.
What’s the catch? We don’t know exactly how Grok-3 was built. Unlike the open-source Grok-1, xAI hasn’t shared details on how it developed its reasoning abilities or whether it used open models like DeepSeek-R1.
Musk says Grok-3 will improve daily and get voice capabilities soon, but businesses may hesitate. Many companies prefer clear, well-documented AI models to avoid legal risks. xAI’s approach could make adoption tricky.
Innovation Waves #3: FinTech’s Impact on the Music Industry
The music industry is evolving, but traditional revenue models are struggling. Rising touring costs, declining sync values, and opaque royalty systems make it harder than ever for artists and industry stakeholders to thrive. How can FinTech change the game?
Join us on March 19th as we dive into the financial innovations reshaping music. This session will explore how technology is driving transparency, streamlining payments, and creating sustainable revenue models for artists, platforms, and labels.
What you’ll learn:
- How FinTech is revolutionizing royalty payments and tackling trust issues
- The role of AI in music monetization and funding models
- How automation is unlocking new business opportunities in music
With real-world insights and expert speakers, this webinar is your gateway to understanding the future of music finance.
Secure your spot now: https://bit.ly/4blLZG9.
Are we entering the era of reliable AI?
As AI becomes deeply integrated into business operations, one thing is clear: trust, ethics, and reliability must be at its core. From ensuring compliance to mitigating risks and choosing the right frameworks, enterprises need to make informed decisions to build AI strategies that are both effective and responsible.
Until next month, stay ahead of the curve!
Jerzy Biernacki
Chief AI Officer at Miquido
Global Corporate Finance Specialist | Structuring Syndicated Loans & Debt Solutions | MD @Monei Matters | Connecting Businesses with Capital
The deal between OpenAI and The Guardian sounds promising because it ensures that journalists are credited and compensated for their work. However, this also brings up bigger questions – will AI-generated summaries replace actual reporting? And how do we prevent AI from overshadowing human journalists? AI should support journalism, not take away the need for real investigative reporting.
Digital Marketing | SEO Specialist | Social Media Management | Digital Campaign specialist
I was really excited when I heard Adobe was bringing AI video generation to Premiere Pro, but the reality is disappointing. Right now, Firefly only generates clips that are a maximum of five seconds long – what am I supposed to do with that? AI video tools should be more flexible, not just gimmicks for quick experiments. I hope Adobe expands its capabilities soon because, as it stands, it’s not a serious tool for professionals.
Driving Innovation and Transforming Enterprises | Technology Leadership | Generative AI Architect | Architectural Expertise | Strategic Visionary | Technical Delivery Excellence | USAF Veteran
Elon Musk’s xAI is claiming Grok 3 is better than GPT-4o and DeepSeek-V3, but is that just marketing hype? Benchmarks seem to show it has strengths in math and reasoning, but real-world results vary. I feel like we need more transparency on how these models are trained so we can actually compare them fairly instead of just relying on corporate claims.
Empowering Brands with Passion: PWD & Divyang Advocate | Seasoned Sales & Marketing Pro | Digital Marketing Maven | PR Enthusiast | Strategic Content Architect | Insightful Business Analyst | MPA & B.Tech Holder
Grok 3’s uncensored approach is a double-edged sword. On one hand, it promotes free speech, which is great. On the other hand, there’s a real risk of it spreading misinformation or harmful content. How do we strike a balance between openness and responsibility? AI companies need to be very careful about this because the wrong approach could lead to major backlash.