This AI newsletter is all you need #15
Towards AI
Making AI accessible to all with our courses, blogs, tutorials, books & community.
What happened this week in AI by Louis
Tons of new research is being released ahead of ICLR 2023’s paper submission deadline, with more coming in the next couple of days, so stay tuned. One I am highlighting below is DreamBooth, a fantastic approach to adapting text-to-image models to specific subjects. Another is DreamFusion, a text-to-3D approach that goes one dimension beyond text-to-image, adding a coordinate and plenty of complexity to the problem. It is very similar to Meta’s most recent publication, Make-A-Video, also released last week, a text-to-video approach that likewise works in three dimensions. Both approaches, though very different, rely heavily on 2D diffusion models and adapt them to an extra dimension: another spatial dimension for a 3D effect, or a temporal dimension for video. I already covered Make-A-Video in an article on Towards AI, and you can expect coverage of both DreamBooth and DreamFusion shortly!
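To make the shared idea concrete, here is a toy NumPy sketch of it: keep a pretrained 2D model frozen and optimize a higher-dimensional representation only through its 2D projections. This is not DreamFusion’s or Make-A-Video’s actual algorithm; the quadratic “prior” and the mean-along-an-axis “renderer” below are illustrative stand-ins.

```python
import numpy as np

# Toy stand-ins: a frozen "2D prior" that likes one target image, and a
# crude renderer that projects a 3D volume to 2D by averaging along an axis.
rng = np.random.default_rng(0)
target = rng.normal(size=(4, 4))       # image the frozen 2D prior "likes"

def render(volume, axis):
    # Average the 3D volume along one axis to get a 2D "view".
    return volume.mean(axis=axis)

def total_loss(volume):
    # Summed squared 2D-prior loss over the three "camera" axes.
    return sum(((render(volume, a) - target) ** 2).sum() for a in range(3))

volume = rng.normal(size=(4, 4, 4))    # the "3D asset" being optimized
initial_loss = total_loss(volume)

lr = 0.1
for step in range(300):
    axis = step % 3                                # vary the "camera" each step
    g2d = 2.0 * (render(volume, axis) - target)    # 2D feedback from frozen prior
    # Backprop the mean by hand: voxels along `axis` share the gradient equally.
    g3d = np.broadcast_to(np.expand_dims(g2d, axis) / volume.shape[axis],
                          volume.shape)
    volume = volume - lr * g3d                     # only the 3D asset is updated

final_loss = total_loss(volume)
print(initial_loss, final_loss)
```

The design point this illustrates: the 2D model’s weights are never touched; all learning happens in the extra-dimensional representation, which is exactly how these papers reuse 2D diffusion priors for 3D or video.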
What you’ve been waiting for is coming soon. I just sent a private message to the 10 lucky winners of the NVIDIA Deep Learning Institute Gift Cards for our giveaway! The GPU winner will be announced in the next iteration of my personal newsletter in partnership with NVIDIA!
Hottest News
Most interesting papers of the week
Enjoy these papers and news summaries? Get a daily recap in your inbox!
The Learn AI Together Community section!
Meme of the week!
Happened to all of us. Meme shared by friedliver#0614.
Featured Community post from the Discord
As most of you already know, last Friday was Tesla AI Day 2022, the second iteration of Tesla’s annual event showcasing the advancements and projects Tesla has worked on and is working on, plus a Q&A, all hosted by Elon Musk and his team. If you couldn’t attend, you can watch the recording, but it’s 3.5 hours long. Fortunately, a fantastic member of our community, tomi.in.ai, wrote a summary for us:
Tesla Humanoid Robot — Optimus
Optimus would cost less than $20,000, and the prototype was developed “technically” within a year. It has 11 degrees of freedom (dof), hands that can carry a 20 lb (9 kg) bag, and can easily be overpowered by a human. Its vision is based on techniques similar to those used in Tesla Autopilot, and it has a damage-control system (should it fall face down) similar to the collision structures of Tesla vehicles.
Full Self Driving (FSD)/ Autopilot
FSD’s customer base increased from 2,000 to 160,000 between 2021 and 2022.
The model-training powerhouse consists of 3 supercomputers with 14,000 GPUs each (4,000 GPUs for auto-labeling and 10,000 GPUs for training). Tesla has trained 75,777 models and shipped 281. It uses a PyTorch extension for an accelerated video library, resulting in a 30% increase in training speed. Automated 3D labeling powered by multi-trip reconstruction replaces 5 million hours of manual labeling with 12 hours on clusters for 10,000 trips. A data engine identifies mispredictions (e.g., a parked car the model thinks is about to cross an intersection), corrects the label, and categorizes clips into evaluation sets; 13,900 clips have been re-labeled so far, keeping the data dynamic. The Dojo system processor is quite advanced; unfortunately, I can’t summarize it, as I understand little about this. You may have to watch this section of the presentation for context.
Some of Musk’s remarks during Q&A:
“Optimus is going to be incredible in 5 years, 10 years will be mind-blowing… and I’m really interested to see that happen, I hope you are too.”
“We’re gonna get a lot of sh*t done, it’s gonna be really cool, and it’s not gonna be easy. But if you are a super talented engineer, your talents will be used to a greater degree than anywhere else.”
AI poll of the week!
What is your preferred way of learning? Join the discussion on Discord.
TAI Curated section
Article of the week
In this article, the author explains and simplifies the LogBERT method for detecting anomalies in log sequences. The core of BERT, the transformer block, is dissected: the embedding layer comes first, followed by positional encoding, self-attention, and finally multi-head attention. LogBERT is relatively simple to understand thanks to the graphical representations, mathematical equations, and suitable examples.
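The pipeline the article dissects (embeddings, positional encoding, self-attention) can be sketched in a few lines of NumPy. This is a minimal single-head illustration with toy dimensions, not LogBERT’s actual implementation; random vectors stand in for learned token embeddings, and multi-head attention would simply run several such heads in parallel and concatenate them.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Sinusoidal positional encoding, as in the original Transformer.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])  # even dims: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])  # odd dims: cosine
    return pe

def self_attention(x, Wq, Wk, Wv):
    # Scaled dot-product self-attention over x of shape (seq_len, d_model).
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Numerically stable row-wise softmax over the attention scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
# Random vectors stand in for learned token embeddings of a log sequence.
x = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)
```

Each output row is a position-aware mixture of all value vectors, which is what lets the model relate a log line to any other line in the sequence regardless of distance.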
Our must-read articles
If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.
Ethical Take by Lauren on Meta’s contribution to Molly Russell’s death
I would be remiss not to address the biggest news in tech ethics of late: the recent British ruling on Meta’s responsibility in the death of 14-year-old Molly Russell in 2017. The ruling relates solely to the cause of her death, pursued by Molly’s family to raise awareness of the devastating shortcomings of child Internet safety. The coroner presiding over the case concluded that Meta and other social media companies contributed to her death, possibly the first time legal blame for a death has been placed on social media companies.
This case sets an important legal precedent for other families whose children died similarly and for future child protection legislation. It’s a very difficult and sensitive area that we are forced to explore as a society — what can be shown and what shouldn’t be, how to let kids be themselves and keep them safe. Answering these questions requires a great deal of care and honesty for the world we’re leaving for the next generation. It demands not shying away from the horrors that youths are exposed to constantly on platforms without sufficient protection, horrors that reduced a courtroom of adults to tears and nightmares. Moving fast and breaking things has consequences, and this tragedy is one of many. As a former teenage girl on Instagram, I am grateful for everyone advocating for increased youth rights and protections online and look forward to the advancements to come.
Job offers
Interested in sharing a job opportunity here? Contact [email protected] or post the opportunity in our #hiring channel on Discord!
If you are preparing for your next machine learning interview, don’t hesitate to check out our leading interview preparation website, Confetti!