Now LLM – Generative LLM for enterprise AI use-cases

With the recently announced Vancouver release, we are releasing Now LLM, ServiceNow's large language models for enterprise domain use cases.

The generative AI highlights of the initial GA release include interactive Q&A, which lets requestors and end users get answers from relevant knowledge corpora; incident, case, and chat summarization, which helps customer support and IT agents hand off and resolve work faster; and assistive code generation, which increases developer productivity.

To deliver Now LLM, we have used best-in-class foundation models, including the pre-trained model that ServiceNow Research developed in partnership with Hugging Face, models from our recently announced partnership with NVIDIA, and other leading open-source models. Depending on the usage scenario, we fine-tune and deliver proprietary, custom models specific to our domains and use cases.

This continues our investment in AI strategy and natural language technologies and builds on our prior work with language models for language understanding; it has been made possible by rapid advances in the field, with auto-regressive, generative language models becoming mainstream.

To power these use cases, Now LLM has been tuned to produce quality responses, resulting in an improved experience for users. Depending on the specific usage scenario, some or all of the following steps are undertaken to deliver the right model:

  1. Extended pre-training – adapting the models to enterprise domains.
  2. Instruction fine-tuning – fine-tuning on domain- and use-case-specific data annotated with instructions.
  3. Dialog fine-tuning – fine-tuning so users can get answers through multi-turn interactions delivered through a conversational interface.
  4. Retrieval Augmented Generation – improving the quality of LLM-generated responses by grounding the model in customer-specific sources of knowledge and data.
  5. User feedback – improving model performance based on human feedback in the product, including both implicit and explicit signals.
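To make step 4 concrete, here is a minimal sketch of the Retrieval Augmented Generation pattern. Everything in it is illustrative: the knowledge base, the token-overlap `score` function (a stand-in for a real embedding-based retriever), and the prompt template are assumptions for the example, not details of Now LLM's actual pipeline.

```python
# Illustrative RAG sketch: retrieve the most relevant documents for a query,
# then build a prompt that grounds the LLM's answer in those documents.

def score(query: str, doc: str) -> float:
    """Jaccard token overlap, standing in for a real embedding similarity model."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by similarity to the query."""
    return sorted(knowledge_base, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_grounded_prompt(query: str, knowledge_base: list[str]) -> str:
    """Assemble a prompt whose context is limited to retrieved documents."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, knowledge_base))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical customer knowledge base.
kb = [
    "To reset your VPN password, open the Service Portal and select Password Reset.",
    "Quarterly expense reports are due on the first Friday of the month.",
    "VPN access requires multi-factor authentication enrollment.",
]

prompt = build_grounded_prompt("How do I reset my VPN password?", kb)
print(prompt)
```

Because the prompt contains only retrieved, customer-specific content, the model is steered toward answers grounded in that content rather than in its pre-training data alone.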

Now LLM was made possible by significant efforts spanning engineering, research, product, QE, and design teams, as well as the datacenter operations teams who built and support the underlying GPU infrastructure in our datacenters, all delivered in record time.

While we continue to learn from customers and accelerate the features we are delivering, we are only scratching the surface with this initial release, and we have an exciting roadmap ahead of us.

Good to see, Jeroen van Gassel. Such a promising addition to the ServiceNow platform for generative AI in enterprise domain use-cases!

Debu Chatterjee

CoFounder and CEO of Konfer | Agentic AI Platform for Regulatory Compliance | Confident AI | Transparency | Trust | Truth

1y

So good to see these enhancements to the ServiceNow AI offerings. Congratulations Shiv and everyone else on the ServiceNow team for your Vancouver release.
