Large Language Models and Networking: What are the Challenges and Opportunities?

It was an honor to represent Juniper Networks at the Networking Channel (https://networkingchannel.eu) forum discussion on LLMs in the computer networking industry. The forum was moderated by noted networking expert Jim Kurose of the University of Massachusetts Amherst and attended by several hundred people across the world. The panelists included ThirdAI co-founder Anshumali Shrivastava, Prof. Jon Crowcroft of the University of Cambridge, Prof. Elisa Bertino of Purdue University, and Prof. Victor O. K. Li of the University of Hong Kong, with coordination by Stavroula Maglavera.

Anshu very well summarized the current best practices based on his academic and industry experience:

  • Manage expectations well
  • Find the right use cases
  • Understand data privacy and data residency
  • Be careful with “essentially free” services in production
  • Recognize that putting an LLM in production is very different from, and potentially much harder than, making a demo

Prof. Bertino described innovations at the interface of the LLM and networking industries, such as SPEC5G, a dataset of natural-language sentences specific to 5G. She also highlighted a potential security risk if Gen-AI coding assistants are used without careful oversight. Prof. Crowcroft challenged the LLM community with four observations:

  • Not affordable and not sustainable
  • Not explainable
  • Many alternative techniques, such as causal analysis and RCA, should be explored first
  • At the peak of the hype cycle

Prof. Li also touched upon his work on causality in the LLM space.

My talk highlighted the comprehensive Gen-AI framework at Juniper that balances data and legal risks against potential business benefits. This framework intrinsically addresses the key observations by Anshu. Among the many use cases, in my opinion Gen-AI coding assistants are likely to be the most successful applications. Regarding the risks and challenges identified by the other panelists, Juniper's framework emphasizes a ‘human-in-the-loop’ for many current Gen-AI applications. For example, coding assistants may suggest code with security risks or copyright issues, so there need to be guard-rails that minimize these risks. Furthermore, these risks are not unique to coding assistants — human developers introduce them all the time. There is no need to throw out the baby with the bath water. I am certainly with Prof. Crowcroft on the non-explainability of LLMs. The explainability work in computer vision was comprehensive and successful, but I haven’t seen such a breakthrough in the LLM space. To some extent, the hybrid approach of combining RAG and fine-tuning perhaps serves the purpose.
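To make the RAG point concrete, here is a minimal, self-contained sketch (not Juniper's implementation; the toy corpus, query, and bag-of-words similarity are illustrative assumptions) of the retrieval step that gives RAG its partial explainability: the retrieved passage is prepended to the prompt, so the answer can be traced back to a concrete source document.

```python
# Toy RAG retrieval: rank documents by bag-of-words cosine similarity
# to the query, then build a grounded prompt from the best match.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str]) -> str:
    # Return the corpus document most similar to the query.
    q = Counter(query.lower().split())
    return max(corpus, key=lambda d: cosine(q, Counter(d.lower().split())))

corpus = [
    "5G network slicing isolates traffic for different service classes.",
    "BGP route reflectors reduce the number of iBGP sessions required.",
]
context = retrieve("how does 5g network slicing work", corpus)
prompt = f"Answer using only this context:\n{context}\nQuestion: how does 5G slicing work?"
```

Because the context document is an explicit input rather than something latent in the model's weights, a human reviewer can check the cited passage — exactly the kind of guard-rail the human-in-the-loop framework calls for.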

Here is the link to the recording: https://www.youtube.com/watch?v=hbfXL6zRnuU

Thank you Raj Yavatkar, Sharon Mandell, T Sridhar, and Sweta Patel for your continued support and sponsorship of AI/ML initiatives at Juniper.
