Cody Enterprise: Introducing Context Filters plus updates to context windows and LLM choice

Cody is an AI coding assistant that helps devs understand, write, and fix code faster. Cody Enterprise uses Sourcegraph code search under the hood to pull context from enterprise-scale codebases and help devs quickly find answers about their entire remote codebases. This week, we’re rolling out several improvements to Cody Enterprise, making it more secure for sensitive code and improving its use of context.

We’re introducing Context Filters, safeguards to prevent sensitive code from being sent to third-party LLM providers. We’re also increasing the amount of context that can be passed into Cody, so users can ask questions about larger files (or larger portions of their codebase) that previously exceeded Cody’s context limit. The JetBrains IDE extension has also been significantly upgraded and is now generally available.

Safeguard your code with Cody Context Filters

A common refrain we hear from enterprises interested in AI coding assistants is concern about data privacy, IP protection, and the risk of exposing or compromising sensitive code. They don’t want their most sensitive data sent to third-party LLM providers, where it could be leaked or used for training.

Cody Context Filters let you choose which repositories Cody can and cannot use as context in its requests. More importantly, they prevent sensitive code from being passed to third-party LLMs.

By default, Cody Enterprise has no restrictions on the repos it can use for context in requests to third-party LLMs, but admins can configure Context Filters via the cody.contextFilters field with include or exclude rules.

  • Include: Cody can only use content from repositories with names matching specified patterns.
  • Exclude: Cody can use all repositories for context except those with names matching the specified patterns.

When both include and exclude rules are specified, Cody uses content from any repository that matches an include pattern and does not match any exclude pattern. For example, a filter can allow content from repositories with names starting with github.com/sourcegraph while excluding the cloud repository.
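As a sketch, such a configuration might look like the following in the site config (the include/exclude rules take `repoNamePattern` regular expressions; the patterns shown here are illustrative, so check the Context Filters docs for the current schema):

```json
{
  "cody.contextFilters": {
    "include": [
      { "repoNamePattern": "^github\\.com/sourcegraph/.*" }
    ],
    "exclude": [
      { "repoNamePattern": "^github\\.com/sourcegraph/cloud$" }
    ]
  }
}
```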

Context Filters are available for Cody Enterprise customers in JetBrains, VS Code, and the Cody web client. To use them, you’ll need to meet the following requirements:

  • A valid Cody Enterprise license and running Sourcegraph version >=5.4.0
  • A Cody client meeting the following version specs: VS Code >=1.20.0 or JetBrains >=6.0.0
  • The cody-context-filters-enabled feature flag set to true

You can find more information on Context Filters and requirements in the docs.

GPT-4o is now available for Cody Enterprise

In a fast-moving field like LLMs, it’s important to have access to the latest and greatest models on the market. That access is core to how we build Cody, and for security-conscious enterprises, we provide the same data privacy protections regardless of the LLM you use.

Hot on the heels of our support for Claude 3, Cody Enterprise now supports GPT-4o, OpenAI’s latest model. It’s two times faster than GPT-4 Turbo and performs even better on general reasoning benchmarks.

GPT-4o will be available in the Sourcegraph release coming later this week, and admins can configure their instance to use it once they’ve upgraded. You can also try it out and compare it against other models today in our LLM Litmus Test at Sourcegraph Labs.

Smarter and larger context windows for Claude 3 Opus, Sonnet, and GPT-4o

We recently announced larger context windows for Cody Free and Cody Pro users, and we’re bringing these updated context windows to Cody Enterprise in this week’s Sourcegraph release.

For the Claude 3 Sonnet and Opus models, we’ve increased the maximum input context limits:

  • 30,000 input tokens of user-defined context (user @-mentioned files)
  • 15,000 input tokens of continuous context (user messages and context sent to the LLM so it can recall earlier parts of an ongoing conversation)

This update means two things:

  • You can now push way more context into Cody, including multiple large files, so that you can ask questions about larger amounts of code
  • You can have much longer back-and-forth chats with Cody before it starts to forget the context from earlier in the conversation
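To get a feel for these budgets, here’s a back-of-the-envelope check. It assumes the common rough heuristic of ~4 characters per token for English text and code (real tokenizers vary by model), and the helper names are purely illustrative:

```python
# Back-of-the-envelope check against Cody's enlarged context limits.
# Assumption: ~4 characters per token, a rough heuristic only.

USER_CONTEXT_LIMIT = 30_000  # tokens of user-defined (@-mentioned) context
CHAT_CONTEXT_LIMIT = 15_000  # tokens of continuous conversation context

def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_user_context(files: list[str]) -> bool:
    """True if the @-mentioned file contents likely fit the 30k budget."""
    return sum(estimate_tokens(f) for f in files) <= USER_CONTEXT_LIMIT

# A 100 kB file comes out to roughly 25,000 estimated tokens,
# inside the 30,000-token user-defined context budget.
print(fits_user_context(["x" * 100_000]))
```

By that estimate, a single 100 kB source file fits comfortably in the user-defined context window, whereas it would have blown past the previous limits.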

We’ve also increased the output token limit to 4,000 tokens for all messages generated by Cody’s underlying LLMs. This means you shouldn’t see Cody's responses getting cut off mid-answer anymore. This output limit update applies to all models.

Note for BYOK customers: The increase in context limits can increase the number of tokens per response, leading to higher LLM costs. You can optionally configure your own context limits in the site admin configuration.
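As one hedged sketch of capping those limits, a BYOK instance’s model settings live under the `completions` block of the site configuration; field names such as `chatModelMaxTokens` should be verified against your Sourcegraph version’s site configuration reference, and the values here are illustrative:

```json
{
  "completions": {
    "provider": "openai",
    "chatModel": "gpt-4o",
    "chatModelMaxTokens": 12000
  }
}
```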

Cody is now generally available for JetBrains IDEs

Another core philosophy behind building Cody is ensuring it works where you work. Specifically for enterprises, we know many of you get your work done in JetBrains IDEs, and we’re excited to announce that Cody for JetBrains is now generally available.

The GA extension brings a long list of updates to JetBrains, including all-new commands, inline code editing, multi-repo context search, and many improvements to performance and stability. We’ve also added proxy support for enterprise environments.

For more information about all of the updates to the JetBrains extension, read our announcement blog.

Enterprises using Cody to empower their dev teams

We’re excited to be working alongside several enterprises already adopting Cody and putting AI into the hands of their dev teams, including the Qualtrics DevX team, who use Cody to take fewer trips out of their IDE, answer questions about complex code faster, and speed up unit test coverage.

Meanwhile, the Leidos team is using Cody to triple the time spent on value-added tasks like writing code and tests. We look forward to continuing our work with enterprise teams to help pro developers code better and faster using AI.

Get started with Cody Enterprise

If you’re interested in using Cody Enterprise, contact us. Existing Cody Enterprise customers should upgrade to the latest Sourcegraph release, available later this week, to take advantage of the new features we’ve announced today.

You can also find this post on our website.
