Optimize your network inference time with OpenVINO
Adrian Boguszewski
You’ve already trained your great neural network. It reaches 99.9% accuracy and saves the world. You would like to deploy it. However, you don’t have a server with expensive discrete GPUs, and you don’t want to build an API. After all, you are a Data Scientist, not a Web Developer… So, is it possible to automatically optimize the network and run it on the CPU and iGPU you already have? Let’s check! During the talk, I'll present the OpenVINO™ Toolkit. You'll learn how to automatically convert a model using Model Optimizer and how to run inference with OpenVINO Runtime: all the magic in only a few lines of code. Afterwards, you'll get a step-by-step Jupyter notebook, so you can try it at home.
Adrian Boguszewski
Affiliation: Intel
AI Software Evangelist at Intel. Adrian graduated from the Gdansk University of Technology in Computer Science five years ago. After that, he started his career in computer vision and deep learning. For the last two years, as a team leader of data scientists and Android developers, Adrian was responsible for an application for taking a professional photo (for an ID card or passport) without leaving home. He is a co-author of the LandCover.ai dataset, and he has taught people how to do deep learning. His current role is to educate people about the OpenVINO™ Toolkit. In his free time, he’s a traveler. You can also talk with him about finance, especially savings and investments.