This AI newsletter is all you need #15

What happened this week in AI by Louis

Tons of new research is being released ahead of ICLR 2023’s paper submission deadline, with more coming in the next couple of days, so stay tuned. One I am highlighting below is DreamBooth, a fantastic approach for adapting text-to-image models to specific subjects. Another is DreamFusion, a text-to-3D approach that goes one dimension further than text-to-image, adding a coordinate and a lot of complexity to the problem. It is very similar to Meta’s most recent publication, Make-A-Video, also released last week, a text-to-video approach that likewise works in three dimensions. Both approaches, though very different, rely heavily on 2D diffusion models and adapt them to another dimension: an extra spatial dimension for a 3D effect, or a temporal dimension for videos. I already covered Make-A-Video in an article on Towards AI, and I will cover both DreamBooth and DreamFusion shortly!

What you’ve been waiting for is coming soon. I just sent a private message to the 10 lucky winners of the NVIDIA Deep Learning Institute Gift Cards for our giveaway! The GPU winner will be announced in the next iteration of my personal newsletter in partnership with NVIDIA!

Hottest News

  1. Tesla AI Day 2022 was last Friday! If you missed it or just couldn’t manage to sit through all 3 hours, no worries; an amazing member of our community, tomi.in.ai, wrote a summary for us in the community section below!
  2. OpenAI removed the waitlist for DALL·E! Sign up and start creating! OpenAI may have sensed some pressure from ever-improving alternatives such as Stable Diffusion, or maybe they are finally confident enough that the results won’t hurt anyone. Either way, they decided to go forward and remove the waitlist for their DALL·E image generation model. You can now use it right away!
  3. An AI that generates videos from text! Meta AI’s new model Make-A-Video is out, and in a single sentence: it generates videos from text. Not only can it generate videos, but it is also the new state-of-the-art method, producing higher-quality and more coherent videos than ever before! Learn more.

Most interesting papers of the week

  1. DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation A new approach for the “personalization” of text-to-image diffusion models (specializing them to users’ needs) by fine-tuning a pre-trained text-to-image model so that it learns to bind a unique identifier to a specific subject.
  2. Improving alignment of dialogue agents via targeted human judgements An information-seeking dialogue agent trained with reinforcement learning (from human feedback) to be more helpful, correct, and harmless compared to prompted language model baselines.
  3. Emb-GAM: An Interpretable and Efficient Predictor Using Pre-Trained Language Models The authors use a pre-trained language model to extract embeddings for each input before learning a linear model in the embedding space: a generalized additive model (GAM), a transparent and interpretable linear function of its input features and feature interactions.
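The Emb-GAM recipe in the last item (frozen embeddings, then a transparent linear model on top) can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper’s implementation: the `embed` function below is a hypothetical stand-in (a character-trigram hashing trick) for a real pre-trained language model, and a plain logistic regression stands in for the full GAM with feature interactions.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Hypothetical stand-in for a frozen pre-trained language model:
    hashes character trigrams into a fixed-size count vector. The paper
    would use embeddings from an actual model such as BERT."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    return vec

def fit_linear(X: np.ndarray, y: np.ndarray, lr: float = 0.5, steps: int = 500):
    """Plain logistic regression by gradient descent: the transparent
    linear model fitted on top of the frozen embeddings."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = p - y                               # gradient of the logistic loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Tiny sentiment-style example: embed once, then learn only the linear part.
texts = ["great movie", "loved it", "terrible film", "hated it"]
labels = np.array([1, 1, 0, 0])
X = np.stack([embed(t) for t in texts])
w, b = fit_linear(X, labels)
preds = (X @ w + b > 0).astype(int)
```

Because the embedding step is frozen, all the learned capacity lives in `w` and `b`, which can be inspected feature by feature; that separation is the source of the interpretability claim.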

Enjoy these papers and news summaries? Get a daily recap in your inbox!

The Learn AI Together Community section!

Meme of the week!

Happened to all of us! Meme shared by friedliver#0614.

Featured Community post from the Discord

As most of you already know, last Friday was Tesla AI Day 2022, the second iteration of Tesla’s annual event featuring the advancements and projects Tesla has worked on and is working on, plus a Q&A, all hosted by Elon Musk and his team. If you couldn’t attend, you can watch the replay, but it’s 3.5 hours long. Fortunately, a fantastic member of our community, tomi.in.ai, wrote a summary for us:

Tesla Humanoid Robot — Optimus

Optimus would cost less than $20,000, and the prototype was developed “technically” within a year. It has 11 DoF (degrees of freedom) and hands that can carry a 20 lb (9 kg) bag, and it can easily be overpowered by a human. Its vision is based on techniques similar to those used in Tesla Autopilot, and it has a damage-control system (should it fall face down) similar to Tesla vehicles’ collision structures.

Full Self Driving (FSD)/ Autopilot

FSD’s customer base increased from 2,000 to 160,000 between 2021 and 2022.

The model training powerhouse consists of 3 supercomputers totaling 14,000 GPUs (4,000 GPUs for auto-labeling and 10,000 GPUs for training). Tesla has trained 75,777 models and shipped 281. It uses a PyTorch extension for an accelerated video library, resulting in a 30% increase in training speed. Automated 3D labeling powered by multi-trip reconstruction replaces 5 million hours of manual labeling with 12 hours on clusters for 10,000 trips. A data engine identifies mispredictions (e.g., a parked car the model thinks is about to cross an intersection), corrects the label, and categorizes clips into evaluation sets; 13,900 clips have been re-labeled so far, keeping the data dynamic. The Dojo system processor is quite advanced; unfortunately, I can’t summarize it, as I understand little about this part. You may have to watch this section of the presentation for context.

Some of Musk’s remarks during Q&A:

“Optimus is going to be incredible in 5 years, 10 years will be mind-blowing… and I’m really interested to see that happen, I hope you are too.”

“We’re gonna get a lot of sh*t done, it’s gonna be really cool, and it’s not gonna be easy. But if you are a super talented engineer, your talents will be used to a greater degree than anywhere else.”

AI poll of the week!

What is your preferred way of learning? Join the discussion on Discord.

TAI Curated section

Article of the week

LogBERT Explained In Depth: Part I by David Schiff

In this article, the author explains and simplifies the LogBERT method for detecting anomalies in log sequences. The core of BERT, the transformer block, is dissected in this article: the embedding layer comes first, followed by positional encoding, self-attention, and finally multi-head attention. LogBERT is relatively easy to understand thanks to the graphical representations, mathematical equations, and suitable examples.
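For readers who want to see the shapes behind the self-attention and multi-head steps the article walks through, here is a minimal NumPy sketch of scaled dot-product attention and a multi-head wrapper. It is a simplified illustration of standard transformer attention, not LogBERT’s actual code; the random matrices below stand in for learned projection weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray):
    """softmax(Q K^T / sqrt(d_k)) V: every position attends to every position."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.swapaxes(-1, -2) / np.sqrt(d_k))
    return weights @ V, weights

def multi_head(X: np.ndarray, n_heads: int, d_model: int) -> np.ndarray:
    """Project the input into per-head Q, K, V, attend in each head,
    then concatenate the heads and project back to d_model."""
    d_head = d_model // n_heads
    outputs = []
    for _ in range(n_heads):
        Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
        out, _ = attention(X @ Wq, X @ Wk, X @ Wv)
        outputs.append(out)
    Wo = rng.normal(size=(d_model, d_model))  # output projection
    return np.concatenate(outputs, axis=-1) @ Wo

# A sequence of 5 token embeddings of width 16, as after the embedding
# and positional-encoding steps described above.
seq_len, d_model = 5, 16
X = rng.normal(size=(seq_len, d_model))
out = multi_head(X, n_heads=4, d_model=d_model)
```

Note that the attention weights form a row-stochastic matrix (each row sums to 1), which is why attention maps can be read as “how much each position looks at every other position.”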

Our must-read articles

A Gentle Intro to AWS ML Related Services — Sentiment Analysis With AWS by Kaan Boke Ph.D.

A Chatbot With the Least Number of Lines Of Code by Chinmay Bhalerao

If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.

Ethical Take by Lauren on Meta’s contribution to Molly Russell’s death

I would be remiss not to address the biggest news in tech ethics of late: the recent British ruling on Meta’s responsibility in the death of 14-year-old Molly Russell in 2017. The ruling relates solely to the cause of her death and was pursued by Molly’s family to raise awareness of the devastating shortcomings of child Internet safety. The coroner presiding over the case concluded that Meta and other social media companies contributed to her death, possibly the first time legal blame for a death has been placed on social media companies.

This case sets an important legal precedent for other families whose children died similarly and for future child protection legislation. It’s a very difficult and sensitive area that we are forced to explore as a society — what can be shown and what shouldn’t be, how to let kids be themselves and keep them safe. Answering these questions requires a great deal of care and honesty for the world we’re leaving for the next generation. It demands not shying away from the horrors that youths are exposed to constantly on platforms without sufficient protection, horrors that reduced a courtroom of adults to tears and nightmares. Moving fast and breaking things has consequences, and this tragedy is one of many. As a former teenage girl on Instagram, I am grateful for everyone advocating for increased youth rights and protections online and look forward to the advancements to come.

Job offers

Research Scientist in ML for Climate Modeling @ The Allen Institute for AI (Hybrid Remote)

Senior AI Software Engineer @ Spot AI (Remote)

Senior Machine Learning Researcher for Copilot @ Github (Remote)

Senior Software Engineer @ Captur (Remote, +/- 2 hours UK time)

Machine Learning Apprentice @ HingeHealth (Remote)

Senior ML Ops Engineer @ BenchSi (Remote)

ML Research Intern @ Genesis Therapeutics (Burlingame, CA)

Interested in sharing a job opportunity here? Contact [email protected] or post the opportunity in our #hiring channel on discord!

If you are preparing your next machine learning interview, don’t hesitate to check out our leading interview preparation website, confetti!
