This AI newsletter is all you need #18

What happened this week in AI by Louis

You must have heard about Stability AI's recent $101 million funding round to support open-source AI, which is great news for all of us in the field. It shows how much power open source can have and how promising a company built around it can be. I hope this kind of news propels the open-source movement in our field, and I deeply hope the big companies will follow this path. I believe giving knowledge away for free is ultimately more valuable to a company than selling a product that you alone contribute to, where the open-source community eventually catches up, forcing you to iterate and compete constantly. In today's world, you can get help from anyone, anywhere, so why not do so? Why not help science progress and make your products even better by having thousands of hands improving your work, as Stability AI did with Stable Diffusion? Look at the result: Stable Diffusion is now even better and more popular than DALL-E.

Speaking of which, what is even more exciting than open source getting funding and support right now is diffusion-based models. New papers appear every day using diffusion models to generate images, 3D models, and videos, to edit images from sketches or text, and more. It is just incredible, and diffusion has largely displaced GANs and transformers for these generation tasks. But is it scalable? I'd love to know what you think: is diffusion here to stay and build upon, or is it just another step toward a better approach to come within a few months?

Hottest News

  1. Google’s new AI can hear a snippet of song—and then keep on playing. This new technique is based on a paper and framework called AudioLM, used for high-quality audio generation with long-term consistency. It maps the input audio to a sequence of discrete tokens and casts audio generation as a language modeling task in this representation space!
  2. A dataset of 5.85 billion CLIP-filtered image-text pairs! The dataset LAION-5B is 14x bigger than LAION-400M, previously the biggest openly accessible image-text dataset in the world.
  3. Deloitte’s fifth edition of its State of AI in the Enterprise research report is out! Deloitte “surveyed more than 2,600 global executives on how businesses and industries are deploying and scaling artificial intelligence (AI) projects. Most notably, the Deloitte report found that while AI continues to move tantalizingly closer to the core of the enterprise – 94% of business leaders agree that AI is critical to success over the next five years – for some, outcomes seem to be lagging.”
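The AudioLM recipe in the first item (discretize audio into tokens, then treat generation as next-token prediction) can be caricatured in a few lines. The uniform quantizer and bigram model below are toy stand-ins of my own choosing; AudioLM itself uses learned neural codecs and a Transformer:

```python
import numpy as np

# Toy illustration of the AudioLM idea: map audio to discrete tokens,
# then cast continuation as a language modeling problem over those tokens.

def tokenize(wave, n_tokens=16):
    """Uniformly quantize samples in [-1, 1] into n_tokens discrete bins."""
    return np.clip(((wave + 1) / 2 * n_tokens).astype(int), 0, n_tokens - 1)

def fit_bigram(tokens, n_tokens=16):
    """Count token-transition frequencies: a minimal 'language model'."""
    counts = np.ones((n_tokens, n_tokens))  # add-one smoothing
    for a, b in zip(tokens[:-1], tokens[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def continue_sequence(model, start, length, rng):
    """Sample a continuation token by token, as a language model would."""
    out = [start]
    for _ in range(length):
        out.append(rng.choice(len(model), p=model[out[-1]]))
    return out

rng = np.random.default_rng(0)
wave = np.sin(np.linspace(0, 20 * np.pi, 2000))  # a "snippet of song"
tokens = tokenize(wave)
model = fit_bigram(tokens)
continuation = continue_sequence(model, tokens[-1], 50, rng)
```

Conceptually, swapping the bigram counts for a Transformer over learned codec tokens is the step from this toy to AudioLM.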

Most interesting papers of the week

  1. Compressed Vision for Efficient Video Understanding “We propose a framework enabling research on hour-long videos with the same hardware that can now process second-long videos.”
  2. TOKEN MERGING: YOUR VIT BUT FASTER. A simple method to increase the throughput of existing ViT (vision transformer) models without needing to train.
  3. Museformer: Transformer with Fine- and Coarse-Grained Attention for Music Generation. A Transformer with novel fine- and coarse-grained attention for music generation.

Enjoy these papers and news summaries? Get a daily recap in your inbox!

The Learn AI Together Community section!

Meme of the week!

That is simply amazing work by AI—now that's what I call fresh sashimi! Meme shared by dimkiriakos#2286.

Featured Community post from the Discord

Shubham Trivedi just shared a notebook on market segmentation and is looking for your feedback! If you have a few minutes available, please have a look and let him know what you think (Shubham.Trivedi#1648 on Discord)!

https://www.kaggle.com/code/shubhamptrivedi/market-segmentation-for-online-healthcare-provider

AI poll of the week!

Join the discussion on Discord.

TAI Curated section

Towards AI Tutorials

At Towards AI, we are big fans of tutorial and practical content with code and examples that help you learn, understand, and implement AI and machine learning models and applications. In pursuit of this goal, we have commissioned a series of blog posts (with accompanying Python code and Google Colab files) on various machine learning algorithms and topics from our Technical Editor, Pratik Shukla. The first of many mini-series focuses on the gradient descent algorithm, from fundamental principles through to a comparison of its variants, with elaborated code examples in Python:

  1. The Gradient Descent Algorithm
  2. Mathematical Intuition behind the Gradient Descent Algorithm
  3. The Gradient Descent Algorithm & its Variants
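As a taste of what the series covers, here is a minimal sketch of plain batch gradient descent on a one-dimensional least-squares problem. The loss, learning rate, and step count are illustrative choices, not taken from the tutorials themselves:

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, steps=100):
    """Fit y ~ X @ w by minimizing mean squared error with batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        residual = X @ w - y                 # prediction error
        grad = 2 * X.T @ residual / len(y)   # gradient of MSE with respect to w
        w -= lr * grad                       # step against the gradient
    return w

# Usage: recover the slope of y = 3x from noiseless data.
X = np.linspace(0, 1, 50).reshape(-1, 1)
y = 3 * X[:, 0]
w = gradient_descent(X, y)
print(w)  # close to [3.0]
```

Variants like stochastic and mini-batch gradient descent change how the gradient is estimated on each step, not the shape of this loop.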

We are also establishing a new Tutorials & Practicals page on our website, where we will host this type of content, both published by our own editorial team and submitted by our contributors, and work to improve its organization and accessibility. We also want to provide a better forum for authors and readers to interact on this content, and for readers to ask questions or get help as they work through the projects. To that end, we are establishing a new Towards AI Tutorials and Practicals forum in our Learn AI Together Community. Please join if you have any questions; let's learn together!

Article of the week

Gradient Descent Optimization by Rohan Jagtap

For another perspective on gradient descent, one of our contributors happened to publish another tutorial this week, together with an implementation of a gradient descent optimizer in TensorFlow.

Our must-read articles

GELU: Gaussian Error Linear Unit Code (Python, TF, Torch) by Poulinakis Kon

Overview of Computer Vision Tasks & Applications by Youssef Hosni

Why Should Euclidean Distance Not Be the Default Distance Measure? by Harjot Kaur

If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.

Job offers

Machine Learning Engineer, Copilot Model Improvements @ Github (Remote, US)

Sr. Data Scientist, Marketplace and Merchandising @ Splic.com (Remote, US)

Machine Learning Engineering Manager @ Verana Health (Remote)

AI Implementation Manager (Healthcare) @ ClosedLoop (Remote)

Machine Learning Engineer, Exploratory Projects, Information Abstraction @ Cohere (Flexible)

Machine Learning Engineer @ Weights and Biases (Remote)

Senior/Staff Machine Learning Engineer, Infrastructure @ Earnin (Remote)

Interested in sharing a job opportunity here? Contact [email protected] or post the opportunity in our #hiring channel on Discord!

If you are preparing for your next machine learning interview, don't hesitate to check out our leading interview preparation website, Confetti!
