Presentations from the "Implementing AI: Vision Systems" Webinar
This was the third event in the Implementing AI webinar series run by the Knowledge Transfer Network and eFutures.
The recording of the full event is available at: Event Recording
Programme
Scalable Quantized Neural Network Inference on FPGAs with FINN and LogicNets
Yaman Umuroglu, Xilinx
Bringing machine learning into high-throughput, low-latency edge applications requires co-designed solutions to meet the performance requirements. Quantized Neural Networks (QNNs) combined with custom FPGA dataflow implementations offer a good balance of performance and flexibility, but building such implementations by hand is difficult and time-consuming. In this talk, we will introduce FINN, an open-source experimental framework by Xilinx Research Labs to help the broader community explore QNN inference on FPGAs. Providing a full-stack solution from quantization-aware training to bitfile, FINN generates high-performance dataflow-style FPGA architectures customized for each network. We will also introduce LogicNets, the newest member of the FINN ecosystem. Through circuit-network co-design, LogicNets enables nanosecond latency and throughput in the hundreds of millions of samples per second.
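To give a flavour of the quantization-aware training step at the front of the FINN flow, here is a minimal sketch using Brevitas, the open-source PyTorch quantization library that FINN consumes. The layer shapes and 4-bit widths below are illustrative assumptions, not a network from the talk.

```python
# Minimal quantization-aware training sketch with Brevitas, the PyTorch
# library whose trained QNNs feed the FINN compiler. Layer sizes and the
# 4-bit widths are illustrative assumptions, not a network from the talk.
import torch
import torch.nn as nn
import brevitas.nn as qnn

class TinyQNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Quantize the input activations, then use 4-bit weights throughout
        self.quant_inp = qnn.QuantIdentity(bit_width=4)
        self.conv = qnn.QuantConv2d(1, 8, kernel_size=3, weight_bit_width=4)
        self.relu = qnn.QuantReLU(bit_width=4)
        self.fc = qnn.QuantLinear(8 * 26 * 26, num_classes, bias=True,
                                  weight_bit_width=4)

    def forward(self, x):
        x = self.quant_inp(x)
        x = self.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = TinyQNN()
# Train with an ordinary PyTorch loop; Brevitas inserts fake quantization
# so the learned weights match a low-precision FPGA datapath.
out = model(torch.randn(1, 1, 28, 28))
print(out.shape)  # torch.Size([1, 10])
```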
Towards Reliable AI-Powered Vision for Autonomous Systems
Professor Tughrul Arslan, The University of Edinburgh
Vision systems in harsh application domains such as aerospace (e.g., aircraft, drones, UAVs, space vehicles) and nuclear power plants are often required to be (semi-)autonomous, mostly because human intervention and situational awareness are limited. Such autonomous systems therefore require reliability, underscoring the need for high-performance, real-time, and fault-tolerant computing capabilities. Meanwhile, the requirements for high performance and reliability in a vision system often conflict: with a conventional procedural computing approach, achieving high reliability often comes at the expense of performance. However, modern mobile technologies and reconfigurable hardware provide opportunities for building efficient AI-powered vision systems for autonomous operations. The talk will start by discussing work on embedded vision systems using dynamically reconfigurable architectures for mobile systems, and will then move on to current projects on reliable AI vision for autonomous systems. It will focus on various applications, including computing architectures for dynamic image processing and hardware architectures for machine learning in vision systems (e.g., for robotic vision and navigation).
Scalable AI Solutions across AI Platforms
Aling Wu, AAEON & Sebastian Borchers, Wahtari
The key challenges for developers in AI today are model training and deployment. To bridge the gap, an OS with long-term security updates, remotely deployable apps and deep neural networks, end-to-end encryption, user-friendly AI inference (nGin), and optimized hardware platforms must all be considered together. In this webinar, we are going to share what we have done and how it can benefit you.
2020 vision - the journey from research lab to real-world product
Jag Minhas, CEO & Founder, Sensing Feeling
Sensing Feeling delivers advanced human behaviour IoT sensing products powered by Computer Vision and Machine Learning. Supported by Innovate UK, R/GA Ventures, Telefónica, BBH and VGC Partners, Sensing Feeling was born from advanced industrial research and development undertaken by the brightest minds in computer vision, deep learning, IoT sensing and behavioural analytics. We will talk about our four-year journey from an idea on paper to fully commercialised products in applied AI-powered vision systems, and the challenges we faced along the way.
Join the EmbeddedAI Group (https://www.dhirubhai.net/groups/13543723/) to see information related to EdgeAI and contribute to growing the AIoT sector.