The "LLM" approach to AI in the real world
Last week I wrote about the issue of trust in AI. Over the weekend, I had a chance to read the US Department of the Treasury's report on AI in financial services, which was released recently. The report makes for great reading in many ways: it talks about the opportunities in the use of AI (nothing too surprising in there for me) and spends a great deal of content on the potential risks, along with some recommendations. For context, the report is a compilation of responses from over 100 players in the industry to an RFI put out by the department last summer, so it is a fair proxy for widespread industry opinion on the subject.

This morning, in the midst of the DeepSeek-influenced AI valuation meltdown, there was also news of DeepSeek being hacked and of LLMs potentially leaking information to third parties (it seems ironic that LLMs, which were themselves built on pirated data, are now the ones leaking… but that's a discussion for another day). There are a couple of reasons why the report and today's events resonated with me.

First, here are some excerpts from Section B1 on Data Privacy, Security, and Standards.

“Though third-party risk management and similar guidance could apply data standards to nonbank firms, some respondents also questioned this approach, saying that when data is transferred outside of financial firms for AI training or processing purposes, it may become more difficult for the financial firms themselves to enforce data security standards… Several respondents called for changes to the Gramm-Leach-Bliley Act (GLBA), including by moving from an opt-out standard – where financial institutions are allowed to share customer data unless the customer explicitly declines – to an opt-in standard. On the other hand, some respondents felt that the GLBA protections were sufficient and that additional enhancements to GLBA may negatively affect model development, particularly if data could not be used to train models or would need to be removed from a trained model… Another respondent argued that owners of proprietary data should be compensated when their data is used by a model.”

Next, excerpts from Section B2 on Bias, Explainability, and Hallucinations.

“Respondents noted the concern about hallucinations – a risk unique to generative AI models – in which a model convincingly produces an incorrect output… improperly trained AI tools may reinforce or exacerbate bias… Another key concern noted by respondents is the difficulty of gaining greater transparency into how a model works, or the ‘explainability’ of, AI models… Multiple respondents highlighted challenges with the transparency, explainability, and accountability of using AI in decision-making processes… Some respondents also highlighted that a key priority among both AI developers and their customers has been reducing the frequency of hallucinations for generative AI technology… it is still challenging for AI models to pinpoint the source of the errors generating output hallucinations.”

There's a lot of other good stuff in the report, but I think you get the general drift.

While the report focused on the financial services sector, the same could be said of every other sector with sensitive and proprietary data: healthcare, life sciences, defence, and so on. The biggest challenges with the use of AI solutions in the real world have less to do with “Is it useful?” and more with “How can I use it safely and securely, in a manner that doesn’t break the bank (pun intended)?”

There is a real need for a cost-effective, secure, and private way to bring AI into the real world of business. While DeepSeek and the clones and derivatives soon to follow will start pushing hard on the cost envelope of this challenge, there is still a broader need for solution approaches that are secure, private, and explainable – by design.

The need of the hour is an approach to AI that is, by design:

  • Traceable and explainable … a rethink on the use of trillion-parameter LLM black boxes?

  • Secure and private … allow each user to control and monetize their own data (right to be forgotten?) while mixing private and public data confidently

  • Budget-friendly … eliminate data hoarding and large-scale training, inference, and operational costs (Note that I think the cost of LLM usage itself may drop to search-level costs through subsidization – essentially free – but the total cost of ownership of an AI solution will remain high without changes in our approach)

Oh, and yes, this morning I did a “told you so” to a couple of friends: I was right when I asked them to short NVDA back in Q3 last year… I don’t know if they took the advice. As for me, I don’t have the wherewithal to short any stock… so I went back to my day job ;-)

What do you foresee in 2025 for the world of AI adoption?