Data Science articles I have read this week (w/c 18/10/21)
Facebook, together with 13 universities, is looking to develop person-centric AI. Most video footage is shot from a spectator's point of view, which makes it unsuitable for training a robot that actually has to perform tasks from a first-person perspective. The solution Facebook's AI team envisions requires first-person video footage, which will be collected by more than 700 individuals across the globe. AR glasses trained on this data could help us find misplaced items, remind us whether we have already added an ingredient while cooking, or jog our memory about a conversation we had. The video in the article is a great summary of what Ego4D is all about.
I love plotly charts. They are beautiful, and it's great to interact with them. However, real data can be messy, and a chart might not look as pretty as in the examples. This blog post is a summary, with Python code examples, of different plot types for visualising your data in a meaningful way.
I’m not sure a conclusion is drawn in the article. The author explores the semantics around statistics and machine learning, arguing that machine learning is not nonparametric statistics. The points that interested me were that the author sees machine learning as being about prediction, whereas statistics is concerned with probabilities and randomness. I’m not sure I fully understand the arguments, but there is a link to his book in the article that could be an informative read. The author also makes a historical argument: prediction predates statistics, having been invented by astronomers who used pattern matching.
I didn’t read the article, but the headline and abstract caught my attention. I have never considered measuring worst-case accuracy. We can look at confidence and credible intervals, but thinking specifically about worst-case performance sounds like a powerful, transformative tool to me. It could also help build trust with customers when introducing a machine learning tool.
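Since I only read the headline, here is my own rough sketch of what "worst-case accuracy" could mean in practice (one common reading: the minimum accuracy over predefined subgroups rather than a single overall average; the function name and toy data are mine):

```python
from collections import defaultdict

def worst_case_accuracy(y_true, y_pred, groups):
    """Return overall accuracy and the lowest per-group accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, min(per_group.values()), per_group

# Toy data: the model looks 90% accurate overall,
# but only 50% accurate on the small subgroup "b"
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
y_pred = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
groups = ["a"] * 8 + ["b"] * 2

overall, worst, per_group = worst_case_accuracy(y_true, y_pred, groups)
# overall is 0.9, but worst is 0.5
```

The gap between the two numbers is exactly what an overall metric hides, which is presumably why reporting the worst case builds trust.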
I came across an article discussing how self-supervised learning can help make progress in medical image recognition. I wasn’t sure what the phrase meant, did some digging, and found this introduction from Facebook AI. Self-supervised learning is related to unsupervised learning: it takes its supervisory signal from data that doesn’t have labels. Examples are predicting the next word in a sentence or the next frame in a video from the previous frames. Self-supervised learning is useful for pre-training a model on background information before a supervised method is employed on a domain-specific data set. In the case of the medical image recognition article, the domain-specific data is medical images, and the self-supervised pre-training would happen on a broad variety of images.
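The next-word example can be made concrete with a deliberately tiny sketch (the corpus and helper below are made up; real systems use neural language models, not bigram counts, but the self-supervised idea is the same: the "label" is just the next word in the unlabeled text, so no human annotation is needed):

```python
from collections import Counter, defaultdict

# Unlabeled "training data" (a made-up toy corpus)
corpus = "the cat sat on the mat the cat ate the fish".split()

# The supervisory signal comes from the data itself:
# for each word, count which word follows it
bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def predict_next(word):
    """Most frequent continuation seen during 'pre-training'."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat", the most common word after "the" here
```

A supervised model for the downstream task would then start from representations learned this way instead of from scratch.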