pytorch vision once for all.ipynb

Once-for-all models are from Once for All: Train One Network and Specialize it for Efficient Deployment. Conventional approaches either manually design or use neural architecture search (NAS) to find a specialized neural network and train it from scratch for each deployment case, which is computationally prohibitive (emitting as much CO2 as five cars over their lifetimes) and thus unscalable. In this work, we propose to train a once-for-all (OFA) network that supports diverse architectural settings by decoupling training and search. Across diverse edge devices, OFA consistently outperforms state-of-the-art (SOTA) NAS methods (up to 4.0% ImageNet top-1 accuracy improvement over MobileNetV3, or the same accuracy but 1.5x faster than MobileNetV3 and 2.6x faster than EfficientNet w.r.t. measured latency) while reducing GPU hours and CO2 emission by many orders of magnitude. In particular, OFA achieves a new SOTA 80.0% ImageNet top-1 accuracy in the mobile setting (<600M MACs).


Get supernet

You can quickly load a supernet as follows:

import torch

super_net_name = "ofa_supernet_mbv3_w10"
# other options:
#   ofa_supernet_resnet50
#   ofa_supernet_mbv3_w12
#   ofa_supernet_proxyless

super_net = torch.hub.load('mit-han-lab/once-for-all', super_net_name, pretrained=True).eval()
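
Once the supernet is loaded, sub-networks can be extracted from it without any retraining. A minimal sketch, assuming the sample_active_subnet / set_active_subnet / get_active_subnet methods from the mit-han-lab/once-for-all repository:

# Randomly sample a sub-network from the OFA supernet
super_net.sample_active_subnet()
random_subnet = super_net.get_active_subnet(preserve_weight=True)

# Manually set the sub-network (ks = kernel size, e = expand ratio, d = depth)
super_net.set_active_subnet(ks=7, e=6, d=4)
manual_subnet = super_net.get_active_subnet(preserve_weight=True)

With preserve_weight=True, the extracted sub-network inherits the corresponding weights from the supernet, which is what allows specialization for a deployment target without retraining.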

OFA network design space: resolution, width multiplier, depth, expand ratio.
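
To make these dimensions concrete, the candidate values below follow the OFA-MobileNetV3 search space described in the paper. The design_space dict is purely illustrative, not a repository API, and the exact resolution step is an assumption:

# Illustrative OFA-MobileNetV3 design space (values from the paper;
# this dict is a hypothetical summary, not part of the library)
design_space = {
    "resolution": list(range(128, 225, 4)),  # input sizes 128..224 (assumed stride 4)
    "width_multiplier": [1.0, 1.2],          # the w10 / w12 supernet variants
    "depth": [2, 3, 4],                      # blocks per stage
    "expand_ratio": [3, 4, 6],               # MBConv expansion ratio
    "kernel_size": [3, 5, 7],                # elastic depthwise kernel sizes
}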


Train once, specialize for many deployment scenarios



Consistently outperforms MobileNetV3 on diverse hardware platforms

OFA-ResNet50 [How to use]
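
The ResNet50 supernet loads the same way as the MobileNetV3 one above, just with a different model name (ofa_supernet_resnet50, one of the options listed earlier). A hedged sketch; running the forward pass on the default active sub-network is an assumption:

import torch

# Load the ResNet50-based OFA supernet via torch.hub
ofa_resnet50 = torch.hub.load('mit-han-lab/once-for-all',
                              'ofa_supernet_resnet50',
                              pretrained=True).eval()

# Dummy forward pass on the currently active sub-network
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = ofa_resnet50(x)  # expected: (1, 1000) ImageNet logits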


