Martian

Software Development

San Francisco, California · 3,149 followers

Outperform any AI model with model routing

About us

Martian built the first model router, backed by $9M from NEA, General Catalyst, and Prosus Ventures. You can think of us as Google for LLMs: every time you send us a request, we automatically find and use the LLM that will give you the best result at the lowest cost. Engineers at 300+ companies, from Amazon to Zapier, have used Martian to achieve higher performance and lower costs with greater security and reliability. The team consists of former AI researchers from Stanford, Harvard, the University of Pennsylvania, the Google Bard team, and Microsoft Research who have built and sold multiple NLP companies and published in leading AI research journals.

Website
https://withmartian.com
Industry
Software Development
Company size
11-50 employees
Headquarters
San Francisco, California
Type
Privately held
Founded
2022
Specialties
Artificial Intelligence, AI interpretability, research, AI safety, AI cost reduction, model deployment, enterprise solutions, and startups

Locations

  • Primary

    301 Lyon St

    San Francisco, California 94117, US

Employees at Martian

Updates

  • Martian · 3,149 followers

    Excited to make two announcements today!

    1. We're partnering with Accenture to power their LLM Switchboard and >$1B in Gen AI deployments.
    2. We're launching Airlock, our LLM compliance automation tool.

    We're excited about working with Accenture: it's a marriage of decades of enterprise expertise with cutting-edge technology. Accenture is an industry leader in generative AI deployments for enterprises, and Martian's router is built to optimize for the characteristics enterprises care about in their AI applications. Martian will plug into the Accenture Switchboard, Accenture's internal gateway and LLM API management solution. We will then enable intelligent routing between LLMs to improve performance beyond the maximum possible with a single model like o1-preview, at significantly lower cost and latency.

    Why is Martian such an important piece here? Using an AI API gateway by itself is like going to a stock trading application (e.g. Robinhood) where you pick which stocks to buy. Using Martian is like having an AI quant trader that knows how much of each stock to buy and when to execute the trade, and can do so autonomously to get the highest possible return (e.g. the Renaissance Medallion Fund).

    But intelligent routing is only one piece that Martian brings to the partnership; we're providing a suite of tools for enterprises building on top of AI. Our newest tool, Airlock, simplifies Gen AI compliance and provides the foundational access to newer models, and the vetting of those models, that enterprises require.

    Why not build this in-house? Traditional approaches only use external information about LLMs, such as their inputs and outputs. Martian is a mechanistic interpretability lab, and we have spent several years developing and patenting our research. The LLM router uses that research to predict the expected performance of each LLM. Airlock uses the same underlying technology to look at the internal weights of the models and detect negative behaviors before they cause a PR nightmare. We let you specify policies (e.g., GDPR, ZDR) or tests (e.g., running or looking inside models), which trigger when new models are released. That means zero time from released → compliant → adopted.

    These moves aim to accelerate enterprise AI adoption by simplifying model integration: we let companies use every AI model instead of being stuck with just one. You can read the full announcement here: https://lnkd.in/gHNvt-BR If you want to work on the underlying research or its commercialization, reach out to us here: https://lnkd.in/gWZYKUDu
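The policy-gating workflow described above (policies or tests that fire automatically when a new model is released) can be sketched roughly as follows. This is an illustrative sketch only: the policy names, metadata fields, and checks are hypothetical and are not Airlock's actual interface.

```python
# Hypothetical sketch of compliance gating for newly released models.
# Policy names and metadata fields below are invented for illustration.
from typing import Callable, Dict, List

# A policy check takes model metadata and returns True if the model passes.
PolicyCheck = Callable[[dict], bool]

POLICIES: Dict[str, PolicyCheck] = {
    # Zero-data-retention: the provider must not retain prompts.
    "zero-data-retention": lambda meta: meta.get("retains_prompts") is False,
    # GDPR-style residency: the model must be served from an EU region.
    "gdpr-region": lambda meta: meta.get("region") in {"eu", "eu-west"},
}

def on_model_released(meta: dict) -> List[str]:
    """Run every registered policy against a new model's metadata.

    Returns the list of failed policy names; an empty list means the
    model is cleared for adoption with no manual review step.
    """
    return [name for name, check in POLICIES.items() if not check(meta)]
```

Triggering the checks at release time, rather than on demand, is what collapses the released → compliant → adopted delay to zero in this scheme.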

  • Martian

    Continued thoughts from Chris Mann as follow-ups to our recent article on automated prompt optimization (APO).

    Chris Mann

    AI Product Management. Former LinkedIn, IBM, Bizo, 1Password and several 0-1's. [I am NOT looking for marketing or development - engineering services]

    There are a number of automated prompt optimization (APO) techniques emerging that focus on improving the system/instruction prompt, either as an activity that occurs before the prompt is put into production or as a recursive feedback loop that evaluates the results of the prompt and suggests improvements to the system prompt based on those results. This is an interesting example of the former: https://lnkd.in/e-hvszhN Here is an article I wrote that calls out a number of real-world production use cases for APO where these techniques could potentially be applied: https://lnkd.in/e5mVkabY

    Maitrix.org (@MaitrixOrg) on X

    x.com

  • Martian

    Claude Sonnet 3.5 Release: Token Prices and Jevons Paradox. With the release of Claude 3.5 Sonnet, there has been a lot of press about Moore's law and the recent LLM price decreases. We put this question to our Co-Founder and Co-CEO Shriyash Upadhyay (Yash), and want to share his perspective on this and his view of the future as it relates to token price and token consumption. We think you will enjoy what Yash had to say, and we would love to hear your thoughts in the comments! https://lnkd.in/eMyBjXip

  • Martian

    At Martian, we are fortunate to work with many of the world's most advanced users of AI. We see the problems they face on the leading edge of AI and collaborate closely with them to overcome these challenges. In this first of a three-part series, we share a view into the future of prompt engineering we refer to as Automated Prompt Optimization (APO). In this article we summarize the challenges faced by leading AI companies including Mercor, G2, Copy.ai, Autobound, 6sense, Zelta AI, EDITED, Supernormal, and others. We identify key issues like model variability, drift, and “secret prompt handshakes”. We reveal innovative techniques used to address these challenges, including LLM observers, prompt co-pilots, and human-in-the-loop feedback systems to refine prompts. We invite the broader AI community to collaborate with us on research in this area. If you are interested in participating, please reach out to us! https://lnkd.in/eyvfEe2X #AI #ArtificialIntelligence #PromptEngineering #APO #MachineLearning #AIResearch #Collaboration #Innovation #ModelVariability #ModelDrift #LLM #FutureOfAI #AICommunity #HumanInTheLoop #AIChallenges #AIsolutions

  • Martian

    Exciting advancements in AI interpretability! The last few weeks have seen a surge in AI interpretability work: Anthropic researchers published work on understanding the Claude 3 models, [Scaling Monosemanticity in AI](https://lnkd.in/eaYkgSNM), and OpenAI released work on understanding GPT-4, [Extracting concepts from GPT-4](https://lnkd.in/gFzijQwM). What's even more fascinating is their use of sparse autoencoders (SAEs).

    Curious why SAEs are so effective and scalable? Our research at Martian provides some compelling insights: https://lnkd.in/gkvN2cXG At Martian, we leverage category theory, a mathematical approach focusing on relationships rather than object internals, to understand why SAEs perform so well. This theory underpins our broader efforts in "model mapping," a series of methods that promise enhanced interpretability without intensive manual analysis.

    Model mapping: a scalable microscope for AI. Our model mapping initiative acts like a microscope, revealing how AI models operate without dissecting their inner workings. This scalable approach benefits from increased computational power, paving the way for more efficient AI development.

    Be part of the journey! We're pushing the boundaries of AI interpretability and alignment. Learn more and get involved at [withmartian.com](https://withmartian.com).

    Scaling AI Interpretability

    blog.withmartian.com

  • Martian

    We are excited to welcome a new Martian to the team! Chris Mann is joining us as Head of Product Marketing. Chris brings a wealth of product management and product marketing experience, having worked at LinkedIn, IBM, 1Password, and a number of startups. Chris wrote a compelling article titled "Why I am Joining Martian," which tells a great story about his journey to finding our company and how he thinks about our strategy. We have linked it here and think you may enjoy reading it: https://lnkd.in/gg3ZsfZp

  • Martian

    OpenAI's top safety researchers quit last week. It's easy to say this is OpenAI's fault, but it's not; it's incentives. Capitalism and safety aren't aligned, and capitalism always wins. To make progress on safety, we need to align those incentives. Here's how we do it.

    Companies work on research which improves their core product. If you and your competitors both raise billions in a competitive space, but the other guy spends it on making better models while you spend it on research about how models work, you lose. This is the fundamental cause of the disagreement between safety researchers and the leadership at companies developing AI. Incentives are why the problem has been systematic rather than a one-off phenomenon. Ilya Sutskever and Jan Leike are not the first safety researchers to leave top companies. @paulfchristiano and many of the folks at @AnthropicAI did the same.

    The most important thing we can do for safety, then, is make products that improve the more we understand models. That way, the incentives are aligned. (This is actually why Martian's mission is to "Make Better AI Tools By Understanding AI Better.")

    What do such tools look like? One example is a model router: choosing which LLM to use for which prompt. Routing is about predicting model performance without running the models first, so that you can send each request to the right model. To do this, you need to understand models. We're sure many people will try to build routers. In fact, we've spoken to some who've tried and given up because it's so hard. To make routers that are truly the best, you need to understand how these models work. That's what a strong incentive looks like.

    Our belief is that there are many such products waiting to be made. AI tooling could be far better than it is today, with far more control over models, far more traceability in their internal operations, and far more applications as a result. At Martian, we're conducting research into how models work. But it can't just be a research project. It needs to be built around commercial applications of the technology. Read more here: https://lnkd.in/gXJB2tYt And if you want to work with us to understand AI, reach out: [email protected]
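The routing idea described above, predicting each model's performance on a prompt without running it and then picking the cheapest model that clears a quality bar, can be sketched as follows. The model names, prices, and scoring heuristic are invented for illustration; a real router would replace `predict_quality` with a trained predictor, which is where the interpretability research comes in.

```python
# Hypothetical sketch of cost-aware model routing; catalog and numbers
# are illustrative, not Martian's actual models or prices.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # USD, made up for illustration

CATALOG = [
    Model("small-fast", 0.0005),
    Model("mid-tier", 0.003),
    Model("frontier", 0.03),
]

def predict_quality(model: Model, prompt: str) -> float:
    """Stand-in for a learned predictor that estimates, without running
    the model, how well it would handle this prompt (0..1)."""
    # Toy heuristic: pricier models score higher; long prompts penalize
    # the smallest model. A real router learns this from model internals.
    base = {"small-fast": 0.70, "mid-tier": 0.82, "frontier": 0.95}[model.name]
    if model.name == "small-fast" and len(prompt) > 500:
        base -= 0.2
    return base

def route(prompt: str, min_quality: float = 0.75) -> Model:
    """Return the cheapest model whose predicted quality clears the bar,
    falling back to the best-predicted model if none does."""
    viable = [m for m in CATALOG if predict_quality(m, prompt) >= min_quality]
    if viable:
        return min(viable, key=lambda m: m.cost_per_1k_tokens)
    return max(CATALOG, key=lambda m: predict_quality(m, prompt))
```

The `min_quality` knob is one way to expose the cost/performance trade-off to users: raise it for critical outputs, lower it to let cheap models absorb routine traffic.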

    Incentives for AI Safety

    blog.withmartian.com

  • Martian

    GPT-4o on Martian: balancing performance and sustainability! GPT-4o, OpenAI's latest frontier model, is now live on the Martian platform for our users. It's faster, cheaper, and more efficient! Efficient models like GPT-4o are the need of the hour. At Davos 2024, Sam Altman highlighted a critical issue: the future energy demands of AI could overwhelm global systems. It's not just about training models; millions of inference interactions with models like GPT-4 also consume significant energy.

    At Martian, we're addressing this challenge head-on with our Model Router, designed to significantly reduce both energy use and costs for AI inference and training: by up to 97%! Here's how we do it:

    Optimal model selection: our router directs each task to the most efficient model, often leveraging smaller models while maintaining high-quality outputs. By stopping computations at "good enough," we avoid the excessive energy costs of massive models.

    Customizable preferences: users can balance cost, energy use, and performance, selecting cheaper, faster models for quick tasks and powerful ones for critical outputs. Improved model routing also optimizes the model training process, not just inference.

    Major industry players are also uniting for a greener AI future. The Green Software Foundation (https://lnkd.in/deGYhAy), led by Accenture, Microsoft, and others, aims to reduce emissions by 45% by 2030. Optimizing both AI training and inference is key to this mission.

    Want to dive deeper? Visit the Martian blog here: https://lnkd.in/g6bPQzRV Join us in building powerful, efficient, and responsible AI systems. The future is bright, and green!

    The Sustainability Challenge of AI: Tackling the Energy Footprint of LLMs

    blog.withmartian.com

  • Martian

    Neural networks have revolutionized the world of AI, but their complexity often makes them opaque and difficult to understand. At Martian, we're tackling this challenge head-on by developing innovative methods to make black-box models transparent, editable, and verifiable. We call this approach "model mapping," and it's already powering over 1,000 businesses through our model router.

    Just as biologists study an owl by examining its physiology (insides) and ecology (relationships), we're doing biology on AI. We're leveraging the power of category theory, a deep branch of mathematics that studies the world through the relationships between objects. By transferring knowledge from well-understood fields like programming to black boxes like neural networks, we're shedding light on the inner workings of AI.

    The impact of model mapping goes beyond transparency. It enables us to:
    1. Ensure AI alignment, making sure models do what we want
    2. Optimize efficiency, making AI as fast and sustainable as possible
    3. Enhance adaptability, allowing us to fix mistakes or add capabilities as needed
    4. Improve accessibility, helping more people understand and trust AI

    At Martian, we're already using these techniques in our model router, delivering value to our customers by understanding models and matching the best AI to a given prompt. Our goal is to make understanding AI models more profitable than simply building new ones.

    Interested in learning more about how model mapping can make AI more transparent? Check out our full article (https://lnkd.in/gx7Ch3ep) and see how you can get involved or collaborate with us. We're excited to work together to advance the understanding of AI! And if you're curious about the applications of this technology, visit withmartian.com to explore our model router and get in touch.

