Large Language Models and Networking: What are the Challenges and Opportunities?
It was an honor to represent Juniper Networks at the Networking Channel (https://networkingchannel.eu) forum discussion on LLMs in the computer networking industry. The forum was moderated by noted networking expert Jim Kurose of the University of Massachusetts Amherst and attended by several hundred people across the world. The panelists included ThirdAI co-founder Shrivastava Anshumali, Prof. Jon Crowcroft of the University of Cambridge, Elisa Bertino of Purdue University, and Prof. Victor O. K. Li of the University of Hong Kong, and the session was coordinated by Maglavera Stavroula.
Anshu summarized the current best practices well, drawing on his academic and industry experience:
Prof. Bertino described innovations at the interface of the LLM and networking industries, such as SPEC5G — a dataset of natural-language sentences specific to 5G. She also highlighted a potential security risk if Gen-AI coding assistants are used without careful oversight. Prof. Crowcroft challenged the LLM community with four observations:
Prof. Li also touched upon his work in Causality in LLM space.
My talk highlighted the comprehensive Gen-AI framework at Juniper that balances data and legal risks against potential business benefits. This framework intrinsically addresses the key observations raised by Anshu. Among the many use cases, in my opinion Gen-AI coding assistants are likely to be the most successful applications. Toward the risks and challenges identified by the other panelists, Juniper's framework emphasizes a 'human-in-the-loop' for many current Gen-AI applications. For example, coding assistants may suggest code with security risks or copyright issues, so there need to be guard-rails that minimize these risks. Furthermore, these risks are not unique to coding assistants — human developers introduce them all the time. There is no need to throw out the baby with the bath water. I am certainly with Prof. Crowcroft on the non-explainability of LLMs. The explainability work in computer vision was comprehensive and successful, but I haven't seen such a breakthrough in the LLM space. To some extent, the hybrid approach of combining RAG with fine-tuned outputs perhaps serves the purpose.
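To make the RAG-plus-fine-tuning idea concrete, here is a minimal sketch of a retrieval-augmented pipeline. It uses a toy word-overlap retriever and a stubbed `generate()` standing in for a fine-tuned model call; the function names, the `DOCS` corpus, and the scoring are all illustrative assumptions, not any specific product's API.

```python
import re

# Toy document store; in practice this would be a vector index over
# internal documentation, code, or network configs.
DOCS = [
    "SPEC5G is a dataset of natural-language sentences specific to 5G.",
    "Human-in-the-loop review adds guard-rails around Gen-AI coding assistants.",
    "Retrieval-augmented generation grounds model outputs in source documents.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query (stand-in for
    embedding similarity search)."""
    q = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        docs,
        key=lambda d: len(q & set(re.findall(r"\w+", d.lower()))),
        reverse=True,
    )
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    """Stub for a fine-tuned LLM call: here it just echoes the grounded
    context so the pipeline is runnable end to end."""
    return f"Q: {query}\nGrounded on: {' '.join(context)}"

def answer(query: str) -> str:
    # Retrieval supplies grounding; the (fine-tuned) model supplies fluency.
    return generate(query, retrieve(query, DOCS))

print(answer("What is SPEC5G?"))
```

The grounding step is what partially compensates for non-explainability: even if the model's internals stay opaque, the retrieved passages give a human reviewer something concrete to check the output against.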
Here is the link to the recording: https://www.youtube.com/watch?v=hbfXL6zRnuU
Thank you Raj Yavatkar, Sharon Mandell, T Sridhar, and Sweta Patel for your continued support and sponsorship of AI/ML initiatives at Juniper.