The "LLM" approach to AI in the real world
Srinivas Padmanabharao
AI Product Leader | Scaling Businesses | Building Teams
Last week I wrote about the issue of trust in AI. Over the weekend, I had a chance to read the U.S. Department of the Treasury's recently released report on AI in Financial Services. The report makes for great reading in many ways: it talks about the opportunities in the use of AI (nothing too surprising in there for me) and devotes a great deal of content to the potential risks of AI, along with some recommendations. For context, the report is a compilation of responses from over 100 industry players to an RFI the department put out in the summer of last year, so it is a fair proxy for widespread industry opinion on this subject.
This morning, in the midst of the DeepSeek-influenced AI valuation meltdown, there was also news of DeepSeek being hacked and of LLMs potentially leaking information to third parties (it seems ironic that LLMs, built on pirated data themselves, are now the ones leaking… but that's a discussion for another day). There are a couple of reasons why the report and today's events resonated with me.
First, here are some excerpts from Section B1 on Data Privacy, Security, and Standards.
“Though third-party risk management and similar guidance could apply data standards to nonbank firms, some respondents also questioned this approach, saying that when data is transferred outside of financial firms for AI training or processing purposes, it may become more difficult for the financial firms themselves to enforce data security standards… Several respondents called for changes to the Gramm-Leach-Bliley Act (GLBA), including by moving from an opt-out standard – where financial institutions are allowed to share customer data unless the customer explicitly declines – to an opt-in standard. On the other hand, some respondents felt that the GLBA protections were sufficient and that additional enhancements to GLBA may negatively affect model development, particularly if data could not be used to train models or would need to be removed from a trained model… Another respondent argued that owners of proprietary data should be compensated when their data is used by a model.”
Excerpts from Section B2 on Bias, Explainability, and Hallucinations.
“Respondents noted the concern about hallucinations – a risk unique to Generative AI models – in which a model convincingly produces an incorrect output… improperly trained AI tools may reinforce or exacerbate bias… Another key concern noted by respondents is the difficulty in gaining greater transparency into how the model works, or the “explainability” of AI models… Multiple respondents highlighted challenges with the transparency, explainability, and accountability of using AI in decision-making processes… Some respondents also highlighted that a key priority among both AI developers and their customers has been reducing the frequency of hallucinations for Generative AI technology… it is still challenging for AI models to pinpoint the source of the errors generating output hallucinations.”
There's a lot of other good stuff in the report, but I think you get the general drift.
While the report focused on the financial services sector, the same could be said of any other sector that handles sensitive and proprietary data: healthcare, life sciences, defence, and so on. The biggest challenges with the use of AI solutions in the real world have less to do with “Is it useful?” and more to do with “How can I use it safely and securely, in a manner that doesn't break the bank (pun intended)?”
There is a real need for a cost-effective, secure, and private way to bring AI into the real world of business. While DeepSeek and the soon-to-come clones, derivatives, and others will start pushing hard on the cost envelope of this challenge, there is still a broader need for solution approaches that are secure, private, and explainable – by design.
The need of the hour is an approach to AI that is secure, private, and explainable – by design.
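To make "by design" slightly more concrete, here is a minimal sketch (in Python, with illustrative regexes and a hypothetical call_llm function, not any real API) of one such pattern: sensitive values are tokenized locally before a prompt ever goes to an external model, so the third party only sees placeholders and the mapping back to real data never leaves the firm.

```python
import re

# Illustrative patterns only -- a real deployment would use a vetted
# PII-detection pipeline, not three regexes.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{10,12}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text):
    """Swap sensitive values for opaque tokens; the mapping stays local."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(pattern.findall(text)):
            token = f"[{label}_{i}]"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping

def restore(text, mapping):
    """Re-insert the original values into the model's response, locally."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

prompt = ("Summarize the dispute on account 123456789012 "
          "raised by jane.doe@example.com")
safe_prompt, mapping = redact(prompt)
print(safe_prompt)  # the only text that would ever reach a third-party model
# response = call_llm(safe_prompt)   # hypothetical external call
# print(restore(response, mapping))
```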
Oh, and yes: this morning I did a “told you so” to a couple of friends, since I was right when I asked them to short NVDA back in Q3 last year… I don't know if they took the advice. As for me, I don't have the wherewithal to short any stock… so I went back to my day job ;-)
What do you foresee in 2025 for the world of AI adoption?