AI at Meta

Research Services

Menlo Park, California · 864,502 followers

Together with the AI community, we’re pushing boundaries through open science to create a more connected world.

About

Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can't advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction and many other areas of AI, and to enable the community to build safe and responsible solutions to address some of the world's greatest challenges.

Website
https://ai.meta.com/
Industry
Research Services
Company size
10,001+ employees
Headquarters
Menlo Park, California
Specialties
research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition and natural language processing

Posts

  • Starting today, open source is leading the way. Introducing Llama 3.1: our most capable models yet. Today we're releasing a collection of new models including our long-awaited 405B. Llama 3.1 delivers stronger reasoning, a larger 128K context window and improved support for 8 languages including English, among other improvements. Details in the full announcement: https://go.fb.me/hvuqhb Download the models: https://go.fb.me/11ffl7

    We evaluated performance across 150+ benchmark datasets spanning a range of languages, in addition to extensive human evaluations in real-world scenarios. Trained on more than 16K NVIDIA H100 GPUs, Llama 3.1 405B is the industry-leading open source foundation model and delivers state-of-the-art capabilities that rival the best closed source models in general knowledge, steerability, math, tool use and multilingual translation.

    We've also updated our license to allow developers to use the outputs from Llama models, including the 405B, to improve other models for the first time. We're excited about how synthetic data generation and model distillation workflows with Llama will help to advance the state of AI.

    As Mark Zuckerberg shared this morning, we have a strong belief that open source will ensure that more people around the world have access to the benefits and opportunities of AI, and that's why we continue to take steps on the path for open source AI to become the industry standard. With these releases we're setting the stage for unprecedented new opportunities, and we can't wait to see the innovation our newest Llama models will unlock across all levels of the AI community.
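    A practical note for anyone downloading the weights above: the Llama 3 / 3.1 instruct variants expect a specific chat prompt format built from special header and end-of-turn tokens. Below is a minimal sketch of that formatting, assuming the template published in Meta's Llama 3 model card; the helper name `format_llama3_prompt` is our own, not part of any Meta API.

    ```python
    def format_llama3_prompt(messages):
        """Build a Llama 3 / 3.1 instruct-style prompt string from chat messages.

        `messages` is a list of {"role": ..., "content": ...} dicts with roles
        such as "system", "user", "assistant". Token strings follow the chat
        template published for the Llama 3 model family.
        """
        prompt = "<|begin_of_text|>"
        for m in messages:
            # Each turn: role header, blank line, content, end-of-turn token.
            prompt += (
                f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
                f"{m['content']}<|eot_id|>"
            )
        # Open an assistant header so the model generates the reply next.
        prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
        return prompt

    prompt = format_llama3_prompt([
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Llama 3.1 release."},
    ])
    print(prompt)
    ```

    In practice you would pass a string like this to the model's tokenizer (or let a library's built-in chat template do the same job) rather than assembling it by hand.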

  • Join us for 30 hours of hacking with Llama in Bengaluru! In addition to prizes at the event, top projects will also receive support to submit applications for our 2024 Llama Impact Grants, where they will have the chance to win a $500K grant to support their work.

    Reposted from Reskilll:

    It's time to bring your ideas to life! The AI Hackathon with Meta Llama is now live, a groundbreaking platform where innovation meets creativity. On 19th-20th October, get ready to collaborate with brilliant minds, solve real-world challenges, and create something truly extraordinary. Whether it's building for Bharat, integrating Llama into WhatsApp, or pushing the boundaries of edge computing, this is your chance to shine! Register now: https://lnkd.in/dJE_RnTs Mark your calendars, prepare to innovate, and let's shape the future together. #BuildWithLlama #InnovationUnleashed #AIHackathon2024 Venue: WeWork India Galaxy, Bengaluru

    AI at Meta · Meta for Developers · Reskilll · Ojasvi Bhatia · Amit Sangani · Rohit Sardana · Punit Jain · Azkar Uddin Khan · Anamika Sharma · Ankit Dhadda

  • We're at #INTERSPEECH2024! If you're on the ground in Greece this week, stop by our booth to explore SeamlessExpressive, MAGNeT, EMG and more with our research teams!

    Following the conference from your feed? Here are links to five papers we're presenting to add to your reading list.

    1. Learning Fine-Grained Controllability on Speech Generation via Efficient Fine-Tuning: https://go.fb.me/httkop
    2. Navigating the Mine Field of MT Beam Search in Cascaded Streaming Speech Translation: https://go.fb.me/3v42k5
    3. Configurable Field of View Speech Enhancement with Low Compute and Low Distortion for AR Glasses: https://go.fb.me/wdb9gt
    4. Towards measuring fairness in speech recognition: https://go.fb.me/y1k4kw
    5. MSRS: Training Multimodal Speech Recognition Models from Scratch with Sparse Mask Optimization: https://go.fb.me/0dxi3j

  • We're still accepting proposals for our LLM Evaluations Grant through this Friday, September 6th. We encourage submissions that utilize evaluations in the areas of complex reasoning, emotional and social intelligence, and agentic behavior. Selected recipients will get $200K in funding to pursue further innovation in this important area of work. Full details here: https://go.fb.me/u1xjkk

  • Open source AI is the way forward, and today we're sharing a snapshot of how that's going with the adoption and use of Llama models. Read the full update here: https://go.fb.me/mfc5ki

    Highlights:
    - Llama is approaching 350M downloads on Hugging Face.
    - Our largest cloud service providers have seen Llama token usage more than double since May.
    - Llama models are being adopted across the industry, with Accenture, AT&T, DoorDash, Goldman Sachs, Infosys, KPMG, Niantic, Inc., Nomura, Shopify, Spotify and Zoom as just a handful of strong examples.

    Open source AI is how we ensure that the benefits of AI extend to everyone, and Llama is leading the way.

  • New research from Meta FAIR: Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model. This recipe combines next-token prediction with diffusion to train a single transformer over mixed-modality sequences. Our experiments show that Transfusion scales significantly better than traditional approaches, and demonstrate that scaling to 7B parameters and 2T multi-modal tokens produces a model that is on par with similar-scale language and diffusion models. More details in the full research paper: https://go.fb.me/4vnybn
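    The core of the recipe above, one set of transformer outputs scored with two losses, can be illustrated with a toy objective: next-token cross-entropy on the discrete text positions plus a DDPM-style noise-prediction MSE on the continuous image positions, summed into a single training loss. This is a minimal NumPy sketch under our own simplifications (random stand-ins for the model's outputs, a balancing weight `lam`), not the paper's implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy mixed-modality sequence: discrete text tokens plus continuous image latents.
    vocab_size, d = 10, 8
    text_tokens = rng.integers(0, vocab_size, size=5)    # discrete targets (LM loss)
    image_latents = rng.normal(size=(4, d))              # continuous positions (diffusion loss)

    # Stand-ins for the shared transformer's per-position outputs.
    text_logits = rng.normal(size=(len(text_tokens), vocab_size))
    noise_pred = rng.normal(size=image_latents.shape)    # model's predicted noise
    noise_true = rng.normal(size=image_latents.shape)    # actual noise added to the latents

    def lm_loss(logits, targets):
        # Next-token cross-entropy over the text positions.
        logits = logits - logits.max(axis=-1, keepdims=True)
        log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
        return -log_probs[np.arange(len(targets)), targets].mean()

    def diffusion_loss(pred, true):
        # Epsilon-prediction mean squared error, as in DDPM-style training.
        return ((pred - true) ** 2).mean()

    # One objective over one mixed sequence; lam balances the two terms.
    lam = 1.0
    total = lm_loss(text_logits, text_tokens) + lam * diffusion_loss(noise_pred, noise_true)
    print(float(total))
    ```

    In the actual model both terms are computed from the same forward pass over one interleaved sequence; here they are separated only to keep the sketch readable.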
