Interesting Content in AI, Software, Business, and Tech- 9/6/2023

A lot of people reach out to me for reading recommendations. I figured I'd start sharing whatever AI papers/publications, interesting books, videos, etc. I came across each week. Some will be technical, others not really. I will add whatever content I found really informative (and remembered throughout the week). These won't always be the most recent publications- just the ones I'm paying attention to this week. Without further ado, here are interesting readings/viewings for 9/6/2023. If you missed last week's readings, you can find them here.

Community Spotlight- Leah Rietveld

Leah Rietveld is a career matchmaker with an amazing newsletter for job seekers. Every week, she shares career tips and open jobs with her readers (this week she shared 100 open roles!!). She has tons of openings for software engineers, AI people, and Web3 devs. If you're a job seeker, I'd highly recommend signing up for her newsletter here. Here are some of the open roles she shared with me that are very actively hiring right now (there were more, but I couldn't screenshot them all)-

PS- I share open job roles on my IG fairly regularly (a few jobs per week) here. Check it out if you're looking for work. We also have a very interesting project coming up soon to help you look for work. Keep your eyes out for that.

If you're doing interesting work and would like to be featured in the spotlight section, just drop your introduction in the comments or reach out to me directly. There are no rules- you could talk about a paper you've written, an interesting project you've worked on, some personal challenge you're working on, ask me to promote your company/product, or anything else you consider important. The goal is to get to know you better, and possibly connect you with interesting people in our chocolate milk cult. No costs/obligations are attached.

Highly Recommended

I'm adding this section to highlight pieces that I feel are particularly well done. If you don't have much time, make sure you at least catch these works.

Driving Over Miss Daisy

This is one of the few pieces I believe will be valuable to you, no matter who you are. It brings up interesting questions regarding automated decision-making.

You are stretched out comfortably in your fully self-driving car, speeding across the road. You don’t have to sit any more, since cars are now designed for maximum comfort and utility, not based on having a human driver.

Suddenly, an enormous concrete building section topples over in the back of the construction truck driving in front of you. In a fraction of a second, this giant obstacle gives your car’s AI a simple decision: brake or dodge.

To your left is an SUV with a human passenger.

To your right is an unprotected motorcyclist on a motorcycle.

Straight ahead is certain death.

What should your AI do?

Read more

Game Designer Explains: Why Onix is an AMAZING Pokémon!

Apparently, the stats given to Onix make it terrible for competitive play. A game designer explains why these specific stats actually make it perfect for gameplay. All software developers, designers, and technical architects should take notes on how Onix's specs embody the "show, don't tell" philosophy.

Onix is a Pokémon which has earned itself an unfortunate reputation for being terrible over the years. Some people even consider it one of the worst Pokémon of Gen 1. However, I disagree. From my perspective, it's one of the best-designed Pokémon of Gen 1. So let's go for a dive into the history and game design of Pokémon Red, Blue and Yellow, to shed some light on why Onix was designed the way it was.

Watch here

Complex Adaptive Systems (Stonk Market) and How to Beat Them

Benjamin is one of the best YouTube channels for finance and investing. The creator has a rare ability to explain very complex and dry subjects using satire, dry humor, and lots of memes. His content is both funny and insightful, so I'd highly recommend checking it out.

Watch here

11 Of The Most Faked Foods In The World | Big Business | Insider Business

It's crazy how hard companies have lobbied to mislabel food and sell it to normal customers like us (this is called fraud). Time to do something about it.

Hate to break it to you, but your truffle oil wasn't made from truffles. Your vanilla extract? Well, that's probably just a lab-made derivative of crude oil. And your shaker of Parmesan cheese? It probably has wood pulp inside. You might feel the companies behind these food products are using deceptive packaging — but it's legal. However, there's a whole other level of trickery that's completely illegal: food fraud. That's when criminals bottle up corn syrup and call it 100% honey, or when they pass off cheap mozzarella as pure Parmigiano-Reggiano. Globally, the fraudulent food industry could be worth $40 billion. It hurts legitimate producers, funds criminal activities, and can even harm consumers. We head around the world to uncover how producers get away with food deception and how we can spot the real stuff.

Watch here

AI Writeups

Ahead of AI #11: New Foundation Models

In this edition of the newsletter, we direct our attention to one of the most prominent highlights of the summer: the release of the Llama 2 base and chat models, as well as CodeLlama, the latest highlights in the open-source AI large language model (LLM) landscape.

Additionally, we delve into the leaked GPT-4 model details, discussing an analysis of its performance over time and covering emerging alternatives to the prevalent transformer-based LLMs.

Read more

Faster gaze prediction with dense networks and Fisher pruning

Predicting human fixations from images has recently seen large improvements by leveraging deep representations which were pretrained for object recognition. However, as we show in this paper, these networks are highly overparameterized for the task of fixation prediction. We first present a simple yet principled greedy pruning method which we call Fisher pruning. Through a combination of knowledge distillation and Fisher pruning, we obtain much more runtime-efficient architectures for saliency prediction, achieving a 10x speedup for the same AUC performance as a state of the art network on the CAT2000 dataset. Speeding up single-image gaze prediction is important for many real-world applications, but it is also a crucial step in the development of video saliency models, where the amount of data to be processed is substantially larger.
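The abstract's core idea can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of the Fisher-pruning criterion (not the authors' code): a parameter's importance is approximated by the empirical Fisher information, i.e. its mean squared gradient over a batch, and low-scoring parameters are masked out.

```python
import numpy as np

def fisher_scores(grads):
    """Approximate each parameter's importance with the empirical Fisher:
    the mean squared per-example gradient."""
    return np.mean(np.square(grads), axis=0)

def prune_mask(scores, keep_fraction):
    """Keep only the top `keep_fraction` of parameters by Fisher score."""
    k = max(1, int(len(scores) * keep_fraction))
    threshold = np.sort(scores)[-k]
    return scores >= threshold

# toy example: gradients for 3 examples x 4 parameters
grads = np.array([[0.1, 2.0, 0.0, 0.5],
                  [0.2, 1.5, 0.1, 0.4],
                  [0.1, 1.8, 0.0, 0.6]])
scores = fisher_scores(grads)
mask = prune_mask(scores, keep_fraction=0.5)  # prune half the parameters
```

In a real network this is applied greedily, layer by layer, and combined with knowledge distillation to recover accuracy, as the paper describes.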

Read more

Shoutout to Logan Thorneloe

How Politicians make money from inflation and what that means for System Design

We have seen multiple theories pitched to explain this (wage stagnation and the cost-of-living crisis)- lack of unionization, price gouging, automation. However, one thing that stands out is the inaction of political leaders, who have failed to stop these problems from spiraling out of hand. Between trillion-dollar+ bailouts for companies, predatory student loan policies, and loosening worker protections- it seems like lawmakers are trying to make things worse. So what gives?

In this article, I will lay out a simple, game-theory-based argument for why this happens and how politicians profit from this. After that, we will cover this problem that shows up all the time when it comes to designing systems (especially with AI) and how we can tackle it.

Read more

Giraffe: Adventures in Expanding Context Lengths in LLMs

Modern large language models (LLMs) that rely on attention mechanisms are typically trained with fixed context lengths which enforce upper limits on the length of input sequences that they can handle at evaluation time. To use these models on sequences longer than the train-time context length, one might employ techniques from the growing family of context length extrapolation methods -- most of which focus on modifying the system of positional encodings used in the attention mechanism to indicate where tokens or activations are located in the input sequence. We conduct a wide survey of existing methods of context length extrapolation on a base LLaMA or LLaMA 2 model, and introduce some of our own design as well -- in particular, a new truncation strategy for modifying the basis for the position encoding.

We test these methods using three new evaluation tasks (FreeFormQA, AlteredNumericQA, and LongChat-Lines) as well as perplexity, which we find to be less fine-grained as a measure of long context performance of LLMs. We release the three tasks publicly as datasets on HuggingFace. We discover that linear scaling is the best method for extending context length, and show that further gains can be achieved by using longer scales at evaluation time. We also discover promising extrapolation capabilities in the truncated basis. To support further research in this area, we release three new 13B parameter long-context models which we call Giraffe: 4k and 16k context models trained from base LLaMA-13B, and a 32k context model trained from base LLaMA2-13B. We also release the code to replicate our results.
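The "linear scaling" finding is easy to picture: instead of feeding the model positions it never saw in training, compress the positions of a long input so they all land inside the trained context window. A minimal sketch of that idea (my own illustration, not the Giraffe code):

```python
import numpy as np

def interpolate_positions(seq_len, trained_context):
    """Linear scaling: compress the positions of a long sequence so they
    all fall within the range the model saw during training."""
    positions = np.arange(seq_len, dtype=np.float64)
    scale = min(1.0, trained_context / seq_len)
    return positions * scale

# an 8k-token input squeezed into a model trained with a 4k context
pos = interpolate_positions(seq_len=8192, trained_context=4096)
```

These scaled positions would then feed into the positional encoding (e.g. the rotary-embedding angles), which is what the various extrapolation methods modify.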

Read more

Generalized BackPropagation, étude De Cas: Orthogonality

This paper introduces an extension of the backpropagation algorithm that enables us to have layers with constrained weights in a deep network. In particular, we make use of the Riemannian geometry and optimization techniques on matrix manifolds to step outside of normal practice in training deep networks, equipping the network with structures such as orthogonality or positive definiteness. Based on our development, we make another contribution by introducing the Stiefel layer, a layer with orthogonal weights. Among various applications, Stiefel layers can be used to design orthogonal filter banks, perform dimensionality reduction and feature extraction. We demonstrate the benefits of having orthogonality in deep networks through a broad set of experiments, ranging from unsupervised feature learning to fine-grained image classification.
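To give a concrete feel for "layers with orthogonal weights": one standard way (a simplified stand-in for the paper's Riemannian machinery) is to retract the weight matrix back onto the Stiefel manifold after each gradient step using a QR decomposition, so its columns stay orthonormal.

```python
import numpy as np

def qr_retraction(W):
    """Project a weight matrix onto the Stiefel manifold (orthonormal
    columns) via QR decomposition."""
    Q, R = np.linalg.qr(W)
    # flip column signs to make the retraction deterministic
    Q = Q * np.sign(np.diag(R))
    return Q

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))   # an arbitrary (non-orthogonal) weight matrix
W_orth = qr_retraction(W)         # columns are now orthonormal
```

The paper's actual approach optimizes directly on the manifold, but the retraction above captures the constraint a Stiefel layer enforces.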

Read more

Tech Writeups

Google Cloud's Core Principles of System Design

A robust system design is secure, reliable, scalable, and independent. It lets you make iterative and reversible changes without disrupting the system, minimize potential risks, and improve operational efficiency. To achieve a robust system design, we recommend that you follow four core principles.

Read more

Why Netflix integrated a Service Mesh in their Backend

Netflix is a video streaming service with over 240 million users. They’re responsible for 15% of global internet traffic (more than YouTube, which comes in at 11.4%).

The company is known for their strong engineering culture. Netflix was one of the first adopters of cloud computing (starting their migration to AWS in 2008), a pioneer in promoting the microservices architecture and also created the discipline of chaos engineering (we wrote an in-depth guide on chaos engineering that you can check out here).

A few days ago, developers at Netflix published a fantastic article on their engineering blog explaining how and why they integrated a service mesh into their backend.

In this article, we’ll explain what a service mesh is, what purpose it serves and delve into why Netflix adopted it.

Read more

Cool Videos

Why Saudi Arabia Pours Billions into Sport Stars (And It Makes Sense)

Sometimes as economists we need to analyse complex decisions to understand what the outcomes might be. On the more baffling end of the scale we have Saudi Arabia spending billions on football, golf and Formula 1 stars who are mostly a little past their prime to come and play in the largely unknown Saudi leagues. Is this simply sports-washing (a PR stunt) or is there something more clever going on?

Watch here

The History of the Natural Logarithm - How was it discovered?

Today we define the natural logarithm as a logarithm with base e, and many people understandably wonder why! Interestingly, the natural logarithm was discovered decades before the number e. In fact, it was discovered before the link between logarithms and exponentials was recognized! In this video, I talk about how and why logarithms were invented, how the natural logarithm arose from logarithmic tables without the need for the number e, and how studying the area under the hyperbola was instrumental in defining logarithms and the natural logarithm.
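For the curious, the hyperbola-area definition the video builds toward can be written down in one line, and it immediately explains why logarithms turn products into sums (the property that made log tables so useful):

```latex
% ln x as the area under the hyperbola y = 1/t from 1 to x:
\ln x = \int_{1}^{x} \frac{1}{t}\, dt
% splitting the area gives the product rule:
\ln(ab) = \int_{1}^{a}\frac{dt}{t} + \int_{a}^{ab}\frac{dt}{t} = \ln a + \ln b
```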

Watch it here

If you liked this article and wish to share it, please refer to the following guidelines.

I'll catch y'all with more of these next week. In the meantime, if you'd like to find me, here are my social links-

Reach out to me

Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.

Small Snippets about Tech, AI and Machine Learning over here

AI Newsletter- https://artificialintelligencemadesimple.substack.com/

My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/

Check out my other articles on Medium: https://rb.gy/zn1aiu

My YouTube: https://rb.gy/88iwdd

Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y

My Instagram: https://rb.gy/gmvuy9

My Twitter: https://twitter.com/Machine01776819

Logan Thorneloe

Software Engineer at Google

1y

Thanks for the shoutout! Gaze prediction is such an amazing use of ML that I would never have thought of. Also, I have to agree with Golden Owl about Onix being done dirty. At least they made Steelix a little bit competitively viable.
