- Highlights of this release include user-friendly features in Keras to help you develop transformers, deterministic and stateless initializers (see the short sketch after these notes), updates to the optimizers API, and new tools to help you load audio data.
- The release also brings performance enhancements with oneDNN, expanded GPU support on Windows, and more.
- This release also marks TensorFlow Decision Forests 1.0.
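As a quick illustration of the deterministic initializer change mentioned above, here is a minimal sketch assuming TensorFlow 2.10 or later, where seeded Keras initializers are stateless, so repeated calls return the same values for a given shape:

```python
import tensorflow as tf

# Seeded initializers are stateless and deterministic (assuming TF >= 2.10):
# calling the same initializer twice yields identical values for a given shape.
init = tf.keras.initializers.GlorotUniform(seed=42)

first = init(shape=(4, 8))
second = init(shape=(4, 8))

# Both calls return exactly the same tensor.
print(bool(tf.reduce_all(first == second)))  # True
```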
- The latest benchmark tests of chips for AI showed new approaches to dealing with ever-increasing scale, including clusters of machines and sparsity (pruning neural nets to make them more efficient).
- The reported results featured some important milestones, including the first-ever benchmark results for Nvidia's "Hopper" GPU, unveiled in March.
- Chinese cloud giant Alibaba submitted the first-ever reported results for an entire cluster of computers acting as a single machine, blowing away other submissions in terms of the total throughput that could be achieved.
- Neural Magic showed how it was able to use "pruning," a means of cutting away parts of a neural network, to produce a slimmer piece of software that performs nearly as well as the full network while needing less computing power (a generic sketch of the idea follows below).
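The following is a generic illustration of magnitude pruning, not Neural Magic's actual tooling; it assumes PyTorch and uses its built-in pruning utilities to zero out the smallest-magnitude weights of a layer:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Build a toy layer and prune away the 80% of weights with the smallest
# absolute value, leaving a sparse layer that needs less compute to run.
layer = nn.Linear(256, 128)
prune.l1_unstructured(layer, name="weight", amount=0.8)

sparsity = float((layer.weight == 0).sum()) / layer.weight.numel()
print(f"weight sparsity: {sparsity:.0%}")  # roughly 80%
```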
It is worth mentioning three significant business challenges around data protection:
- First, companies must work out how to give their scientists safe access to large datasets for training ML models that deliver novel business value.
- Second, as part of their digital transformation efforts, many companies are migrating their ML processes (training and deployment) to cloud platforms, where they can be handled more efficiently at large scale. However, exposing the data those ML processes consume to the cloud platform brings its own data risks.
- Third, organizations that want to take advantage of third-party ML-backed services must currently be willing to relinquish ownership of their sensitive data to the provider of those services.
- Upcoming "Hopper" GPU broke records in its MLPerf debut, according to Nvidia.
- Nvidia announced yesterday that its upcoming H100 "Hopper" Tensor Core GPU set new performance records during its debut in the industry-standard MLPerf benchmarks, delivering results up to 4.5 times faster than the A100, which is currently Nvidia's fastest production AI chip.
- When it comes to large AI models, remarkable performance across a wide range of applications often comes with a big budget for hardware and running costs. As a result, most AI users, such as researchers at startups or universities, can do little more than look on at the striking headlines about the cost of training large models.
- Fortunately, thanks to the open source community, serving large AI models has become easy, affordable, and accessible to most. Now you can see how well the 175-billion-parameter OPT performs on text generation tasks, all online, for free, and without any registration (a small local sketch follows below).
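The online demo mentioned above serves the full 175-billion-parameter model; as a rough local sketch (assuming the Hugging Face transformers library and the much smaller facebook/opt-125m checkpoint), generating text with an OPT model looks like this:

```python
from transformers import pipeline

# Load a small OPT checkpoint and generate a short continuation.
# (The online demo uses the 175B model; this sketch uses the 125M one.)
generator = pipeline("text-generation", model="facebook/opt-125m")

result = generator("Large language models are", max_new_tokens=30)
print(result[0]["generated_text"])
```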