Modelcode AI

Software Development

Modelcode AI (www.modelcode.ai) is the world leader in Code Modernization through Generative AI.

About us

Modelcode.ai, Inc. employs innovative, proprietary Generative AI to automate Code Modernization. The company, founded and led by Silicon Valley veteran entrepreneurs, engineers, and well-known AI experts with multiple exits, focuses on eliminating tech debt: for example, large-scale refactoring, transforming legacy monolithic code and services into modular code and microservices, C# to Java translation, Python 2 to Python 3 upgrades, automated unit testing, and documentation generation.

Website
https://modelcode.ai
Industry
Software Development
Company size
2-10 employees
Type
Privately held
Specialties
Generative AI, Artificial Intelligence, Code Modernization, Unit Testing, and Language to Language Translation

Modelcode AI employees

Posts

  • Modelcode AI reposted

    View Michael Fertik's profile
    Michael Fertik is a LinkedIn Influencer

    Serial Entrepreneur and Venture Capitalist michaelfertik.substack.com "Robinhood of the blogosphere, Sherlock Holmes 2.0 of Databanks" - Handelsblatt

    Modelcode.ai opens Modelcode Chai in Israel and hires like crazy! Here we go! After investing in almost twenty startups in Israel since 2015 (including three unicorns!), and after hiring thousands of people across the US, UK, India, Germany, etc., this is the first time I have ever opened a company in Israel myself. Applied AI in Israel. This is technology/country fit.

  • Modelcode AI reposted

    View Antoine Raux's profile

    Co-founder / Chief AI Officer at modelcode.ai

    Evaluating LLM-Powered Code Modernization: The Real Challenge

    At Modelcode AI, we're building autonomous agents to modernize large enterprise codebases, an ambitious task that raises unique evaluation challenges. Traditional GenAI code benchmarks like SWE-Bench and HumanEval rely on automated tests for correctness, but they assume isolated, well-scoped problems. Code modernization, on the other hand, spans dozens or hundreds of interdependent files, making evaluation far more complex.

    A Unique Advantage: Real-World Data

    Fortunately, we have access to one of the most valuable sources of evaluation data: real-world projects. Each modernization project provides a starting state (the original repository), a defined goal (e.g., migrating from AngularJS to React or refactoring into a microservices architecture), and a reference end state (the final, customer-accepted repository), all critical components of our evaluation approach. As we work with long-term, recurring customers, we can systematically leverage our internal metrics data in a fully secure and private way to track improvements in our platform, ensuring that each iteration of our AI agents delivers better outcomes for every project we take on. This real-world feedback loop gives us an unparalleled ability to refine and validate our approach beyond what synthetic benchmarks can offer.

    Our Two-Tiered Approach to Evaluation:

    1. Task-Level Evaluation: We break projects into smaller tasks (e.g., translating a single file) and evaluate them in isolation. By leveraging modern source control systems to create and recreate repository states at will, this approach enables fast, reproducible testing.

    2. Full-Project Evaluation: While task-level metrics are key to understanding core agent functionality, they miss critical platform capabilities like task decomposition, planning, and error recovery. To measure true impact, we also re-run full modernization projects, predicting with higher fidelity our ability to handle future customer projects.

    What Metrics Matter?

    - Functional Correctness: Does the code work? Compiling and passing unit tests is a start, but enterprise codebases often lack test coverage. We use a mix of deterministic checks and LLM-based evaluation against our reference implementation.

    - Maintainability & Style: Code needs to be readable, consistent, and easy to review. We leverage LLM judges (validated by human reviewers) to assess how well the new code integrates with existing repositories.

    By combining fast iteration with deep, real-world validation, we're building an evaluation framework that drives measurable improvements to Modelcode's AI-powered code modernization platform. How do you evaluate GenAI for code at scale? Let's discuss!
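
    The post describes the task-level tier only at the architecture level. As a rough illustration, assuming a git repository whose history contains both the original starting state and the customer-accepted reference end state, a minimal sketch might look like the following; run_agent and llm_judge are hypothetical stand-ins, not Modelcode's actual API.

        # Minimal sketch of task-level evaluation against a reference end state.
        import difflib
        import subprocess
        from pathlib import Path

        def checkout(repo: Path, ref: str) -> None:
            """Recreate a repository state at will via source control."""
            subprocess.run(["git", "-C", str(repo), "checkout", ref], check=True)

        def run_agent(repo: Path, task_file: str) -> str:
            """Hypothetical: invoke the modernization agent on a single file."""
            raise NotImplementedError

        def llm_judge(candidate: str, reference: str) -> str:
            """Hypothetical: LLM-based grading against the reference implementation."""
            raise NotImplementedError

        def evaluate_task(repo: Path, task_file: str, start: str, reference: str) -> dict:
            checkout(repo, start)                    # reset to the task's starting state
            candidate = run_agent(repo, task_file)   # agent produces the modernized file
            checkout(repo, reference)                # load the customer-accepted end state
            expected = (repo / task_file).read_text()
            # Deterministic check: textual similarity to the reference.
            similarity = difflib.SequenceMatcher(None, candidate, expected).ratio()
            # LLM-based check: correctness and style relative to the reference.
            return {"file": task_file, "similarity": similarity,
                    "judge": llm_judge(candidate, expected)}

    Because every state is addressed by a commit, each task is cheap to rerun and fully reproducible, which is what makes this tier suitable for tight iteration loops.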

  • Modelcode AI reposted

    View Nir Ben Israel's profile

    Founding team @ modelcode.ai | AI & LLMs Leader | Algo-trader

    Over the past month, my LinkedIn feed has been dominated by one word: agents. There's no doubt that agents are the hottest topic in AI right now, and for good reason. While SOTA models (as we know them today) may be approaching their potential limits, agents unlock the next level of capability by leveraging these models in dynamic, tool-driven environments.

    One area where agents truly shine is code generation. I recently heard a friend make a point that truly resonated with me: "Give a developer a Google Doc containing several files and ask them to edit and create a working PR. No one could realistically accomplish this without an IDE and its tools, and yet this is what we expect LLMs to do? LLMs need agents equipped with the right tools to succeed."

    At Modelcode AI, we specialize in automatic code modernization using LLMs, and agents are at the core of our technology stack. Here's my 2 cents on building agentic systems for code generation:

    1. Mimic developer collaboration. The most effective systems should mimic the synergy of a development team. Think multiple agents interacting seamlessly: delegating tasks, collaborating, and reviewing each other's work. For example: a code-review agent for quality checks, a research agent for identifying dependencies and understanding context, and a QA agent to test and validate code changes. This collaborative "team of agents" approach is essential for tackling complex modernization tasks.

    2. Evaluate and checkpoint agents at will. Agentic runs are, by nature, stochastic. Sometimes they might diverge and get stuck in an endless loop. A good approach is to execute multiple agentic pipelines (as suggested in this paper) and to implement mechanisms to evaluate and select the best option. Alternatively, you can recognize when the process is going off track and add the ability to roll back to a validated checkpoint. This resilience and adaptability is a critical component.

    3. More tools = smarter agents. Developers rely on countless tools to get their job done; agents should have the same. Beyond basic tools for file manipulation, compilation, and testing, think about tools that mimic human developer behavior: "right-clicking" to get a symbol's definition, glancing into a file to answer a specific query, or debugging tools for deeper insights. State-of-the-art LLMs excel when given the right tools. The more closely these tools align with what a human developer would use, the more powerful the agents become.

    Of course, the devil is in the details, and this field evolves daily. At Modelcode AI, we pride ourselves on staying at the forefront of these advancements. If you're as excited about the future of agents as we are, and believe you have what it takes to help shape it, we want to hear from you. DM me, let's chat. #AI #Agents #CodeModernization #LLM
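
    The rollback idea in point 2 can be pictured concretely. A minimal sketch, assuming git commits serve as validated checkpoints, with agent_step and validate as hypothetical stand-ins for the agent action and the compile/test gate (an illustration, not Modelcode's implementation):

        # Sketch: roll a stochastic agent run back to the last validated checkpoint.
        import subprocess
        from pathlib import Path

        def git(repo: Path, *args: str) -> str:
            out = subprocess.run(["git", "-C", str(repo), *args],
                                 check=True, capture_output=True, text=True)
            return out.stdout.strip()

        def agent_step(repo: Path, step: int) -> None:
            """Hypothetical: one agentic edit (translate a file, fix a test, ...)."""
            raise NotImplementedError

        def validate(repo: Path) -> bool:
            """Hypothetical: compile and run tests to accept or reject the step."""
            raise NotImplementedError

        def run_with_checkpoints(repo: Path, steps: int, max_retries: int = 3) -> None:
            checkpoint = git(repo, "rev-parse", "HEAD")   # last known-good state
            for step in range(steps):
                for _ in range(max_retries):
                    agent_step(repo, step)
                    if validate(repo):
                        git(repo, "add", "-A")
                        git(repo, "commit", "-m", f"agent step {step}")
                        checkpoint = git(repo, "rev-parse", "HEAD")
                        break
                    # Divergence detected: discard the attempt, retry from checkpoint.
                    git(repo, "reset", "--hard", checkpoint)
                else:
                    raise RuntimeError(f"step {step} failed {max_retries} times")

    Bounding retries per step keeps a diverging run from looping forever, which is exactly the failure mode the post calls out.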

  • Modelcode AI reposted

    View Antoine Raux's profile

    Co-founder / Chief AI Officer at modelcode.ai

    Interested in building real-world GenAI-for-coding technology that eliminates technical debt across the industry? Come work with us at Modelcode AI!

    View Michael Fertik's profile
    Michael Fertik is a LinkedIn Influencer

    Serial Entrepreneur and Venture Capitalist michaelfertik.substack.com "Robinhood of the blogosphere, Sherlock Holmes 2.0 of Databanks" - Handelsblatt

    KLAXON: Actively hiring top-flight engineering, ML, and AI talent in the USA and Israel. Tell everyone. Join me at modelcode.ai, Inc.
