How YOU Can Help Make AI Accessible to Everyone

This week we announced L.I.T, a collaborative pre-training project to train AI models for everyone, not just the gatekeepers! Meta researchers published a self-supervised learning cookbook, DeepLearning.AI and OpenAI launched a prompt-engineering course for developers, and GitLab released a new AI-driven security feature. Let’s dive in!

Open Source Highlight: L.I.T (Large-scale Infinite Training)


Today, we are thrilled to announce L.I.T, a collaborative training effort that empowers the community to create the best models for the good of humankind!

Who should join?

Individuals and institutions with spare compute capacity who want a world where AI is open and used for the benefit of all.

Click here to get involved!

Research Highlights:


  • Meta researchers published a paper on self-supervised learning (SSL), which they describe as the "dark matter of intelligence" and a promising avenue for advancing machine learning. They note, however, that training SSL methods successfully can be a delicate process with a high barrier to entry. To lower it, they provide a "cookbook"-style guide that collects the latest SSL recipes and lays the foundations for navigating the terrain of methods and the parameters involved in training, hoping to empower curious researchers to explore the potential of self-supervised learning.
  • A new paper offers a comprehensive guide for practitioners and end-users working with large language models (LLMs) on natural language processing tasks. It examines LLM usage from the perspectives of models, data, and downstream tasks: the influence of pre-training, training, and test data; use cases and non-use cases across NLP tasks; the impact of spurious biases; and practical deployment considerations such as efficiency, cost, and latency. The authors hope the guide gives researchers and practitioners valuable insights and best practices for a wide range of NLP tasks.
  • Topological Deep Learning (TDL) is a comprehensive framework for extracting knowledge from complex systems such as social networks and protein structures, with theoretical and practical advantages that could benefit many applied sciences. A lack of unified notation and language across Topological Neural Network (TNN) architectures, however, makes it hard to build on existing work and deploy TNNs to real-world problems. To address this, the researchers provide an accessible introduction to TDL and compare recently published TNNs using a unified mathematical and graphical notation, offering valuable insights into current challenges and exciting opportunities for the future of the field.
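Contrastive objectives are one of the recipe families such a cookbook covers. As a rough illustration only (not code from the paper), here is a minimal NumPy sketch of a SimCLR-style NT-Xent loss over two augmented views of the same batch; all names, shapes, and the temperature value are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss, the
    contrastive objective used by SimCLR-style SSL methods.
    z1, z2: (n, d) embeddings of two augmented views of the same n inputs."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # The positive for row i is the other view of the same input: (i + n) mod 2n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
z1 = rng.standard_normal((8, 16))
loss_random = nt_xent_loss(z1, rng.standard_normal((8, 16)))   # unrelated "views"
loss_aligned = nt_xent_loss(z1, z1 + 0.01 * rng.standard_normal((8, 16)))
# Nearly identical views should yield the lower loss.
print(loss_aligned, loss_random)
```

The zero-diagonal trick and the (i + n) positive-pair indexing are exactly the delicate bookkeeping the cookbook's authors warn about: a small off-by-one here silently trains the model toward the wrong pairs.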

ML Engineering Highlights:

  • Arize AI launched Phoenix, an open-source library that monitors LLMs for hallucinations. Phoenix flags poor model responses, visualizes clusters of data problems, and also claims to help with data-drift issues.
  • DeepLearning.AI, in partnership with OpenAI, has launched a new short course called ChatGPT Prompt Engineering for Developers. The course teaches developers how to use LLMs to build new applications with the OpenAI API. Participants will learn how LLMs work and how to engineer effective prompts for tasks like summarizing, inferring, transforming, and expanding text, as well as building custom chatbots. The course claims to be suitable for both beginner and advanced machine learning engineers.
  • GitLab announced a new AI-driven security feature that explains potential vulnerabilities to developers and plans to expand the feature to automatically resolve vulnerabilities using AI in the future. The new feature explains vulnerabilities within the context of a code base, combining information about the vulnerability with insights from the user’s code, making remediation of issues easier and faster.
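The core prompt-engineering pattern such courses teach — delimit untrusted input and state the task and output constraints explicitly — can be sketched without calling any API. The message structure below follows OpenAI's chat format; the helper name, delimiter choice, and wording are our own assumptions, not course material.

```python
def build_summary_prompt(text: str, max_words: int = 30) -> list:
    """Return a chat-format message list asking an LLM to summarize `text`.
    Delimiting the input text makes it hard for its contents to be
    mistaken for instructions (a basic prompt-injection defense)."""
    delimiter = "###"
    user_msg = (
        f"Summarize the text delimited by {delimiter} "
        f"in at most {max_words} words.\n"
        f"{delimiter}{text}{delimiter}"
    )
    return [
        {"role": "system", "content": "You are a concise technical summarizer."},
        {"role": "user", "content": user_msg},
    ]

messages = build_summary_prompt(
    "GitLab announced a new AI-driven security feature that explains "
    "potential vulnerabilities to developers."
)
print(messages[1]["content"])
```

In practice this list would be passed as the `messages` argument of a chat-completion request; keeping prompt construction in a pure function like this also makes the prompts easy to unit-test.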

Tutorial of the Week


Using LLMs efficiently and effectively is becoming more and more important. In this tutorial, Sebastian Raschka breaks down Low-Rank Adaptation (LoRA) and shows how to fine-tune a LLaMA model in a computationally efficient manner.
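The idea behind LoRA is compact enough to sketch: freeze the pretrained weight matrix W and learn only a low-rank update BA. The NumPy toy below (not Raschka's code, and nowhere near a real LLaMA fine-tune) just illustrates the parameter savings and the zero-initialization trick; the dimensions and rank are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
d_in, d_out, rank = 64, 64, 4   # rank << d, so far fewer trainable parameters

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, rank))                   # trainable, zero init => B @ A = 0

def lora_forward(x, alpha=1.0):
    """Forward pass: frozen path W @ x plus the low-rank adapter (B @ A) @ x."""
    return W @ x + alpha * (B @ A) @ x

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapted model initially matches the frozen one.
print(np.allclose(lora_forward(x), W @ x))

full_params = d_out * d_in              # 4096 weights in the frozen matrix
lora_params = rank * (d_in + d_out)     # 512 trainable weights in A and B
print(lora_params / full_params)
```

Only A and B receive gradients during fine-tuning, which is where the memory savings come from; at inference time B @ A can be merged into W so the adapter adds no latency.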

Community Spotlight


Stevica Kuharski used Lit-LLaMA to create Plotcrafting, a retro-inspired writing assistant for modern storytellers. The model runs on a single RTX 3090 and communicates with a web app via WebSockets.

Don’t Miss the Submission Deadline

  • BMVC 2023: 34th annual conference on machine vision, image processing, and pattern recognition. Nov 20-24, 2023 (Aberdeen, United Kingdom). Submission deadline: May 12, 2023, 16:59 PDT
  • NeurIPS 2023: 37th Conference on Neural Information Processing Systems. Dec 10-16, 2023 (New Orleans, Louisiana). Submission deadline: May 17, 2023, 13:00 PDT
  • ICCVS 2023: 14th International Conference on Computer Vision Systems. Sep 27-29, 2023 (Vienna, Austria). Submission deadline: May 29, 2023
  • ICMLA 2023: 22nd International Conference on Machine Learning and Applications. Dec 15-17, 2023 (Jacksonville, Florida). Submission deadline: Jul 15, 2023

Want to learn more from Lightning AI? “Subscribe” to make sure you don’t miss the latest flashes of inspiration, news, tutorials, educational courses, and other AI-driven resources from around the industry. Thanks for reading!
