2-Min AI Newsletter #13
Asif Razzaq
AI Research Editor | CEO @ Marktechpost | 1 Million Monthly Readers and 56k+ ML Subreddit
Latest AI/ML Research Highlights
A team of researchers at the Complutense University of Madrid has developed the first processor core implementing the posit standard in hardware and showed that, bit for bit, the accuracy of a basic computational task improved by up to four orders of magnitude compared with standard floating-point arithmetic. They presented their results at last week's IEEE Symposium on Computer Arithmetic.
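To make the number format concrete, here is a small software decoder for 8-bit posits following the 2022 Posit Standard (sign bit, variable-length regime, es = 2 exponent bits, then fraction). This is an illustrative sketch of the format itself, not the Madrid team's hardware design:

```python
# Illustrative decoder for 8-bit posits (es = 2, per the 2022 Posit
# Standard). A sketch of the number format, not of the hardware core.

ES = 2       # exponent field width fixed by the standard
NBITS = 8

def decode_posit8(bits: int) -> float:
    """Decode an 8-bit posit bit pattern into a Python float."""
    bits &= 0xFF
    if bits == 0:
        return 0.0
    if bits == 0x80:
        return float("nan")           # NaR ("Not a Real")
    sign = -1.0 if bits & 0x80 else 1.0
    if bits & 0x80:                   # negative: decode the 2's complement
        bits = (-bits) & 0xFF
    # Regime: run of identical bits after the sign bit.
    bitstr = format(bits, "08b")[1:]  # drop the sign bit
    run_bit = bitstr[0]
    run_len = len(bitstr) - len(bitstr.lstrip(run_bit))
    k = run_len - 1 if run_bit == "1" else -run_len
    rest = bitstr[run_len + 1:]       # skip the regime terminator bit
    # Exponent: up to ES bits, zero-padded on the right if truncated.
    exp = int((rest[:ES] + "0" * ES)[:ES], 2)
    frac_bits = rest[ES:]
    frac = int(frac_bits, 2) / 2 ** len(frac_bits) if frac_bits else 0.0
    useed = 2 ** (2 ** ES)            # 16 for es = 2
    return sign * useed ** k * 2 ** exp * (1.0 + frac)
```

The variable-length regime is what gives posits their tapered precision: values near 1.0 get more fraction bits than very large or very small values, which is the intuition behind the accuracy gains reported above.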
Salesforce researchers have developed LAVIS (short for LAnguage-VISion), an open-source library for training and evaluating state-of-the-art language-vision models on a rich family of common tasks and datasets, and for off-the-shelf inference on custom language-vision data. This will make emerging language-vision intelligence and capabilities available to a wider audience, encourage practical adoption, and reduce repetitive effort in future development.
Recently, a research team from Harvard University and Microsoft showed for the first time that neural networks can autonomously discover concise learning algorithms in polynomial time. The group proposed an architecture that reduces the parameter count, even from trillions, down to a constant by combining recurrent weight-sharing between layers with convolutional weight-sharing. Their results also raise the possibility that the synergy of these two forms of sharing in RCNNs may be more effective than either technique alone.
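The parameter-count argument can be sketched in a few lines: if one small convolutional kernel is reused both across spatial positions (convolutional weight-sharing) and at every layer of the unrolled network (recurrent weight-sharing), the number of trainable parameters stays constant no matter how deep the network runs. A minimal NumPy illustration (not the authors' architecture; the kernel size and depths here are arbitrary):

```python
import numpy as np

def recurrent_conv(x, w, depth):
    """Apply the SAME 1-D convolution kernel w at every layer (recurrent
    weight-sharing); within a layer, w is shared across all positions
    (convolutional weight-sharing)."""
    for _ in range(depth):
        x = np.tanh(np.convolve(x, w, mode="same"))
    return x

rng = np.random.default_rng(0)
w = rng.standard_normal(3)       # the ONLY trainable parameters
x = rng.standard_normal(32)

shallow = recurrent_conv(x, w, depth=2)
deep = recurrent_conv(x, w, depth=100)

# Parameter count is w.size == 3 in both cases, independent of depth.
```

A conventional deep network would need a fresh weight tensor per layer; here, depth adds computation but no parameters, which is the sense in which the count drops "down to a constant".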
With the posting in June of a think piece on the Open Review server, LeCun offered a broad overview of an approach he thinks holds promise for achieving human-level intelligence in machines. Implied if not articulated in the paper is the contention that most of today's big projects in AI will never reach that human-level goal. In a discussion this month with ZDNet via Zoom, LeCun made clear that he views with great skepticism many of the most successful avenues of research in deep learning at the moment.
In the new paper Human-level Atari 200x Faster, a DeepMind research team applies a set of diverse strategies to Agent57, with their resulting MEME (Efficient Memory-based Exploration) agent surpassing the human baseline on all 57 Atari games in just 390 million frames — two orders of magnitude faster than Agent57.
JupyterLab is fundamentally intended to be an extensible environment. Any component of JupyterLab can be enhanced or customized through JupyterLab extensions, which can provide new themes, file viewers and editors, or renderers that enable rich outputs in notebooks. Extensions can also add keyboard shortcuts, system settings, and items to the menus or command palette. Extensions can depend on other extensions and expose an API for other extensions to use.