How to avoid AI DotCom II and the EU AI Act roadblocks to innovation

Well, isn't the AI landscape a mess - and, for that matter, what is left of free market capitalism? You remember: the innovation and competition that got us out of the caves.

The Big 7 have lost the faith of the banking cartel. OpenAI is close to a cash call, thanks to a burn rate falsely heralded as a virtue - the sort last championed prior to the Dot Com crash. Now, as then, ludicrous P/E ratios and other crazy valuation metrics are symptomatic of a gold rush mentality - one where those selling the picks and shovels (NVIDIA) make the money, because they are selling the fools the tools. The crash that bottomed out in 2002 saw $5tn, or 78%, of the NASDAQ wiped out; but those who then moved in made fortunes as the internet and hype-free tech powered forward.

So AI is not the problem, and C-level talent has both the ability and the opportunity to choose which horse and technology to back. With liquidity draining from the global financial system and the world's resources drying up, we need to invest urgently and wisely to avoid unnecessary cost to enterprise, people and planet. Then there is the legislation to contend with, which threatens to put the Eurozone so far behind the US and the BRICS that competitive advantage will become such a fantasy that even Sam Altman's seven-trillion-dollar, product-every-minute roadmap will seem tame.

Central to everyone's thinking in Europe is the EU AI Act, which came into force on August 1, 2024. It has already prompted Meta to announce that new releases would not be coming Europe's way - including Llama 3.1, released on July 23, 2024, which (a little gobbledegook for the techies) ships in three sizes: 8B, 70B and 405B parameters.

Mindful that time is money, let's start with compliance and dive into how challenger enterprises can feel confident deploying AI in the face of the drat, drat and double drat EU AI Act - and the bursting OpenAI bubble - through the strategic use of private small language models (SLMs and small LLMs, or SLLMs):

1 Regulatory Compliance:

  • Reduced Risk Profile: Private SLLMs, being smaller and less complex, inherently pose a lower risk of unintended consequences or biases compared to larger models. This aligns them more closely with the EU AI Act's risk-based approach, making compliance easier.
  • Data Control: Challenger enterprises can maintain full control over their data by keeping it within their own infrastructure. This addresses concerns about data privacy and security, crucial aspects of the EU AI Act.
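
For the technically minded, the data-control point is very concrete: a small open-weight model can be downloaded once and then run entirely on your own hardware, so prompts and documents never leave your network. The sketch below is a generic illustration using the open-source Hugging Face transformers library; the model name and prompt are placeholders chosen for illustration, not smartR AI's or SCOTi's actual stack.

```python
# Minimal sketch: fully local inference with a small open-weight model using
# the Hugging Face transformers library. Model name and prompt are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # any small model you are licensed to run

# Weights are fetched once, cached, and executed on your own infrastructure;
# prompts and documents never need to leave your network.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "List the key data-protection clauses a supplier contract should contain."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```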

2 Cost-Efficiency and Flexibility:

  • Lower Computational Costs: Smaller LLMs require significantly less computational power to train and run. This is a major advantage for challenger enterprises with limited resources compared to the massive infrastructure needed for models like OpenAI's GPT series (see the rough sizing sketch after this list).
  • Customization: Private LLMs can be fine-tuned more easily to specific tasks or industries, leading to better performance and greater relevance to the enterprise's needs.
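
To make that cost gap tangible, here is a rough back-of-the-envelope sketch of the memory needed simply to hold a model's weights. The assumptions are ours for illustration only: 16-bit (2-byte) parameters and the 8B/70B/405B sizes quoted above for Llama 3.1; real deployments vary with quantisation, context length and serving overhead.

```python
# Rough sizing sketch: memory required just to store model weights, assuming
# 16-bit (2-byte) parameters. Sizes mirror the Llama 3.1 variants noted above.
def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate gigabytes occupied by the raw weights alone."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for name, size_b in [("8B model", 8), ("70B model", 70), ("405B model", 405)]:
    print(f"{name}: ~{weight_memory_gb(size_b):,.0f} GB of weights alone")
```

Roughly speaking, an 8B model is a single-GPU job, while the 405B class needs a multi-GPU cluster before it answers a single question.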

3 Differentiation and Competitive Edge:

  • Niche Specialization: By focusing on smaller, specialized models, challenger enterprises can cater to niche markets or industries that larger, general-purpose LLMs might overlook. This specialization can become a key differentiator.
  • Avoiding Homogenisation: Private LLMs allow enterprises to develop unique AI capabilities that are not simply replicating the features of OpenAI or other major players. This helps them stand out in a crowded market.

4 Building Trust and Transparency:

  • Explainability: Smaller, more interpretable LLMs can make it easier to explain how AI decisions are made, fostering trust with customers and stakeholders.
  • Bias Mitigation: The focused nature of private LLMs allows for more targeted efforts to identify and mitigate biases, aligning with ethical AI principles emphasized in the EU AI Act.

The CEO's Practical Steps:

  1. Identify Use Cases: Determine specific areas where AI can add value to your business and align with your risk tolerance.
  2. Select the Right Partner or LLM: Choose a private LLM or specialist like smartR AI that matches your computational resources, data requirements, and use case.
  3. Fine-tune and Customise: Invest in training and fine-tuning the model to your specific tasks and industry (a minimal sketch follows these steps).
  4. Monitor and Iterate: Continuously monitor performance, gather feedback, and refine the model to ensure optimal results and compliance.
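
For the techies, steps 3 and 4 often boil down to parameter-efficient fine-tuning of a small open-weight model on your own data. The sketch below shows one common way to do that, using LoRA adapters via the open-source Hugging Face peft library; the base model, hyperparameters and training suggestions are illustrative assumptions, not a prescribed smartR AI or SCOTi pipeline.

```python
# Minimal sketch: parameter-efficient (LoRA) fine-tuning of a small open-weight
# model with the Hugging Face peft library. All names and values are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder small model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA attaches a small set of trainable adapter weights to the attention
# projections and freezes the original weights - this is what keeps
# customisation affordable on modest on-premises hardware.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# From here, train on your own domain data (for example with transformers.Trainer
# or trl's SFTTrainer), then monitor, gather feedback and iterate as in step 4.
```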

Early on in the whole privacy, compliance and AI hype cycle, we at smartR AI realised that regulation and legislation biased towards the tech cartel was coming - and that the inevitable AI bubble burst would come with it. You see, although we are just four years old as a company, our credibility as a group (3 IPOs, exits in double figures, and therefore over 150 years of experience, track record and bruises you just can't buy off the shelf) blends grey heads with Scottish and Bay Area talent.

With this - and an onshore-only approach (never just give someone a job; it's about a purpose with a paycheck that truly unleashes and enables your talent) - central to how we work, we knew there was another way. This is why our mainly US customers love us and our bigger competitors loathe us. You see, SCOTi AI is a 'he' because he was a real person who lives on in the spirit of what AI can and must be. We knew, as a challenger with true tech spirit at heart, that Big Tech was always going to buy up data once it got found out for exploiting what it sees as the 'grey data area'.

So although smartR is classed as a start-up, we weave old heads with the brightest young talent Edinburgh has to offer, with that true tech spirit at heart. Because we solve customers' problems, we immediately realised that 70% of enterprise data isn't being used by companies (well, not for themselves - although Big Tech is certainly helping itself to our knowledge), particularly in their analysis and supply chain thinking. So #SCOTi was designed as the answer. Isn't it time we thought smartR AI and realised our analysts and people can be liberated, not replaced?

In Conclusion

Companies seeking to harness the power of AI responsibly should consider the environmental advantages of smaller, private GPT-based language models. By focusing on targeted training, existing but optimised hardware, efficient water use, and overall process streamlining, these models offer a sustainable way to integrate AI into business practices.

One such challenger brand that is defying the odds and making friends is SCOTi AI. Small but perfectly formed, SCOTi AI is your loyal companion by smartR AI. Mankind's best friend is a private suite of scalable SLLMs, built within your ecosystem - so the only one mining your data is you.

1 Private - so the only one mining your data is you

2 SCOTi AI is yours to own - that's right, no on-costs

3 Pre-trained, first-mover GPT - SCOTi AI will give you no behavioural issues.

4 Small, green and perfectly formed - there is rarely a need for server upgrades, which makes SCOTi AI the greenest GPT around.

5 A low, low, one-off price with no energy bill surprises

Talk to the humans about a pre-trained SCOTi puppy - why not DM Neil Gentleman-Hobbs (Head of Global Partnerships & BD), Oliver King-Smith (CEO), Greg James (US, North & South America) or Steve Hansen (Aus & APAC)?


