Did you know that Ellipsis can automatically write pull request descriptions? The automated descriptions include context from your product roadmap and the team's recent work. As a developer pushes more commits, the summary updates, meaning that other engineers are always up to date on open pull requests.
Ellipsis (YC W24)
Software Development
New York, NY · 2,745 followers
Automatically review code and fix bugs with AI
About us
Ellipsis will review your code (and do a good job!). Ellipsis helps engineers ship faster by providing automated code reviews and bug fixes. It's used by over 100 companies and installed in 19,000 codebases. Free 7-day trial available.
- Website: https://www.ellipsis.dev/
- Industry: Software Development
- Company size: 2-10 employees
- Headquarters: New York, NY
- Type: Privately Held
- Founded: 2023
Locations
- Primary: New York, NY 10010, US
Updates
Why I’m All In on Ellipsis: A Game-Changer for AI Code Reviews & Bug Fixes

I just spent the weekend trying out Ellipsis (YC W24) (https://www.ellipsis.dev), and I have to say, I’m thoroughly impressed. For those who haven’t heard of it yet, Ellipsis is a smart AI assistant that lives inside your PRs and helps automate everything from code reviews and bug spotting to style guide enforcement and even code generation. Think of it as your new favorite pair programming partner that never sleeps and doesn’t miss a thing. Here’s what stood out to me:

Instant, Context-Aware Code Reviews: Ellipsis doesn’t just look for syntax issues. It catches logic bugs, smells, and even subtle anti-patterns. It understands the intent behind the code and makes recommendations that actually make sense, not just the typical "nit" suggestions.

Smart Q&A Right Inside the PR: Need clarification on a function or architectural pattern? Just tag @ellipsis-dev in a comment, and it replies with helpful, relevant answers, almost like having your tech lead on call 24/7.

Adapts to Your Style Guide: You can teach Ellipsis your coding preferences in plain English. After that, it flags style violations your way, not just by some off-the-shelf linter.

Code Generation & Refactoring That Feels Human: Ask Ellipsis to write or refactor code, and it returns fully functional, tested code snippets. Not "AI-generated spaghetti", but real, readable code that’s ready to merge (after your quick sanity check, of course).

Secure and Private: It’s SOC 2 Type 1 certified, doesn’t store your code, and integrates directly into your GitHub/GitLab workflow with minimal setup.

After using Ellipsis across a few real-world PRs, it’s earned a permanent place in my dev toolkit. It reduces the mental load, helps prevent defects earlier, and lets us focus more on architecture and problem-solving. If you're in engineering leadership, an SDET, or even a solo dev trying to scale quality without slowing down, give it a spin. There's a free trial, and I guarantee you’ll be as impressed as I was.

Check it out here: https://www.ellipsis.dev

#AIForDevelopers #CodeReview #DevTools #EllipsisDev #SoftwareEngineering #SDET #ProductivityTools #qa #ellipsis
Last night a Sentry alert woke me up at 2am: one of our worker nodes used for codebase indexing was running out of memory. Turns out, Ellipsis (YC W24) is now installed in a 4GB repository that averages one new commit every ~6 seconds.
“Ellipsis’s comments feel like they came from a tech lead, not a lintern” – by which he meant a linter turned up to 11.
We recently tested a handful of AI-driven PR review tools to see how they’d help us catch bugs, ensure code quality, and free up time to focus on functionality. Here’s what we learned from our experience in a Vercel Turborepo-based monorepo (TS packages, React apps, ShadCN, Tailwind, and a Python app):

CodeRabbit: Great for enforcing code standards, but not strong at catching implementation bugs.

devlo: Delivered some very insightful suggestions, but can get noisy with too many comments, some of which weren’t as helpful.

Microsoft Copilot: Provided fewer comments overall but nailed the summary. We couldn’t fully automate it for every PR (possibly our error).

Ellipsis (YC W24): Our favorite! It filters out unhelpful or incorrect suggestions, so you only see high-quality comments. Plus, it incorporates our own feedback and guidelines seamlessly.

Our take: Yes! We have already fixed bugs that manual review would have missed. (Always caused by Copilot, of course; I promise the code we write manually is perfect.)

What AI-powered review or coding tools have you tried, and how did they compare? Let’s share insights!
I must say, it's really nice to have tools like Ellipsis (YC W24). Just today I was about to submit a pull request into BAML's compiler to improve some of our error messages for how we detect duplicate names in the various scopes a user types in. Ellipsis was able to detect a really nuanced edge case that Cursor had autocompleted wrong. Thanks Nick Bradford! Before -> After
Bugs like this are why your team needs an AI Code Review solution. Try it for free at www.ellipsis.dev
Ellipsis (YC W24) just caught the hidden gotcha of a lifetime. And saved me from taking down prod. Again. I intended to remove "[bot]" suffixes from a string, so I used the builtin rstrip() method. I had no idea that this removes any trailing combination of the characters passed as an argument, meaning if a GitHub username ends with "bot", "tob", "obo", "boot", etc., it would also be removed! See how "manwearingboot" becomes "manwearing"? That's because "boot" is made up of characters contained in the argument I passed, "[bot]". I've been working in Python for 15 years and I had no idea rstrip() behaves like this!! Thankfully, Ellipsis caught the bug and suggested the fix -> 1 click to resolve the problem.
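A quick sketch of the gotcha described above, in plain Python (no Ellipsis involved): rstrip() treats its argument as a set of characters to strip from the right, while str.removesuffix() (Python 3.9+) removes only the exact suffix, which is the usual fix for this class of bug. The example strings are illustrative.

```python
# rstrip("[bot]") strips any trailing run of the characters "[", "b", "o", "t", "]",
# not the literal suffix "[bot]".
print("dependabot[bot]".rstrip("[bot]"))   # -> "dependa"        (way too much removed)
print("manwearingboot".rstrip("[bot]"))    # -> "manwearing"     (the bug from the post)

# removesuffix() (Python 3.9+) removes the exact suffix, and only if it is present.
print("dependabot[bot]".removesuffix("[bot]"))  # -> "dependabot"
print("manwearingboot".removesuffix("[bot]"))   # -> "manwearingboot"  (unchanged)
```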
Last week at the Y Combinator alumni retreat, I spoke to fellow AI founders about how we built Ellipsis (YC W24) to automate code review and bug fixes, and how our dev workflow changed over the last year, including:

- Composability is key: breaking hard problems into smaller LLM agents that can be independently benchmarked
- Evals are all you need: we're investing deeply in annotating data and LLM-as-Judge (a generic sketch of this pattern follows below)
- LLMs can help throughout the dev workflow: for example, on failed test cases, we use an LLM Auditor to auto-diagnose where in the agent's trajectory it went off the rails
- Reducing false positives in code reviews: we've developed an extensive filtering pipeline to decrease noise and raise trust in the comments we leave, including new ways to leverage customer feedback

...and lots more! Link to full technical deep dive in comments
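For readers unfamiliar with the LLM-as-Judge idea mentioned above, here is a minimal, generic sketch of how such a filter can work. It is not Ellipsis's actual pipeline; call_llm, JUDGE_PROMPT, and keep_comment are hypothetical names used purely for illustration.

```python
# Minimal LLM-as-Judge sketch for filtering candidate review comments.
# `call_llm` is a hypothetical stand-in for whatever model client you use.
import json

JUDGE_PROMPT = """You are grading an automated code review comment.

Diff:
{diff}

Proposed comment:
{comment}

Reply with JSON only: {{"correct": true or false, "useful": true or false, "reason": "..."}}
"""


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: plug in your own model client here."""
    raise NotImplementedError


def keep_comment(diff: str, comment: str) -> bool:
    """Keep a candidate comment only if the judge rates it both correct and useful."""
    verdict = json.loads(call_llm(JUDGE_PROMPT.format(diff=diff, comment=comment)))
    return bool(verdict.get("correct")) and bool(verdict.get("useful"))


# Usage: filter candidate comments before posting them to the PR.
# kept = [c for c in candidates if keep_comment(pr_diff, c)]
```

Requiring both flags before posting errs on the side of silence, which matches the false-positive-reduction goal described in the post.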
On February 18, join Tower Research Ventures in NYC for the Conference on Synthetic Software (CSS) 2025, a private summit on the foundations and applications of generative code for the production of performant software. This is your chance to see demos and share ideas with the practitioners, researchers, and engineers who are reshaping the development landscape. In addition to Tower Research Ventures, speakers will include representatives from Ellipsis (YC W24), Codeflash, Princeton NLP, Daytona, and more. Reserve your spot today: https://lu.ma/k2q27yi3 #GenerativeCode #AIInnovation
Not enough software engineers realize that one of the most annoying/tedious parts of code review - checking for duplicated logic in your codebase - has been basically automated away using LLMs. -> Ellipsis (YC W24)