Reliable and scalable Internet of Things (IoT) solutions have become more accessible and widespread with the development of edge-computing resources such as the Intel Edge Software Hub, ADLINK hardware and Amazon Web Services (AWS). To accelerate the use of such resources, the June IoT North webinar featured a talk by Vibhu Bithar, Sr. Solution Architect (AI, ML, IoT), and Devang Aggarwal, Product Analyst (AI and IoT), at Intel Corporation, who shared their experience of making the computer vision I-2-I (ideation-to-inception) journey easy. Vibhu has been developing IoT applications that solve complex business problems for over seven years. Devang works on cloud developer adoption of the Intel Distribution of OpenVINO toolkit with cloud service providers (CSPs). They showed how to create a headless system that takes an RTSP feed from a capable drone or an action camera and performs inference at the edge. The webinar was co-hosted, as usual, by the Pitch-In Project and sponsored by Newcastle University and https://www.goto50.ai.
Vibhu Bithar and Devang Aggarwal walked through the main prerequisites and steps in developing a computer vision solution. The five essentials for the journey are 1) an edge compute instance with high-speed inference capability (e.g. the ADLINK Vizi-AI devkit), 2) a video source (a drone or surveillance feed), 3) a cloud-ready environment, 4) a headless system that is ready on boot, and 5) a trained model. With an edge device and a video source ready, the developers set about building a cloud-to-edge pipeline. However, building such a pipeline with OpenVINO from scratch can result in an ecosystem that is too complicated. To simplify things and focus on the business case rather than technical intricacies, Vibhu worked with the Edge Software Hub team within Intel to build a simplified deployment process for faster inferencing on Intel hardware. The development process started by downloading a reference implementation (RI) from the Edge Software Hub, which made it easy to set up the edge device and the cloud, and then deploy the solution at the edge. The next step was setting up a headless inferencing unit. For that, the ADLINK Vizi-AI devkit was used, to which two WiFi dongles were added. They installed the NGINX RTMP module and wrote code for RTMP feed processing, which made live streaming of data from the drone possible. With inference running at the edge on the OpenVINO system, the next step was to train the model and then deploy it at the edge. For training data, there may be publicly available resources applicable to the use case; otherwise, the data has to be collected by developers on site. The accelerators Vibhu and Devang built for training models consisted of three foundational utilities: optimisation, benchmarking and deployment. Specifically, to make model optimisation easy, they developed a Python function that simplified and implemented inline model conversion using the OpenVINO toolkit Model Optimizer.
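As a rough sketch of what such an inline-conversion helper could look like (the function name, defaults and structure here are assumptions for illustration, not the speakers' actual code), a thin Python wrapper around the Model Optimizer's `mo` command line might assemble and run the conversion in one call:

```python
import shlex
import subprocess

def convert_to_ir(input_model: str, output_dir: str = "ir_models",
                  extra_args: str = "", run: bool = False) -> list:
    """Assemble (and optionally run) an OpenVINO Model Optimizer command.

    `input_model` is a trained model file (e.g. a TensorFlow frozen graph
    or an ONNX file); the Model Optimizer converts it to IR (.xml/.bin).
    """
    cmd = ["mo",
           "--input_model", input_model,
           "--output_dir", output_dir]
    cmd += shlex.split(extra_args)  # pass through any additional mo flags
    if run:
        # Requires the OpenVINO dev tools (e.g. `pip install openvino-dev`)
        # so that the `mo` entry point is available on PATH.
        subprocess.run(cmd, check=True)
    return cmd

# Example: build the command without executing it.
cmd = convert_to_ir("frozen_inference_graph.pb", output_dir="ir")
```

Once wrapped this way, the conversion step can be called inline from a training script or notebook instead of being run manually from a shell.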
With OpenVINO IR conversion, it was possible to write the inference code once and then use models from different frameworks in IR format. Benchmarking was implemented in Intel DevCloud: a developer can take OpenVINO IR models straight from an S3 bucket and benchmark them in just one click using the sample Jupyter notebook provided. To fine-tune the model, a video was captured and annotated; the model was then fine-tuned and converted to OpenVINO IR in Amazon SageMaker using the sample notebook. For deploying the inference application, the Intel Edge Software Hub was used, which hosts multiple use cases and features, such as AWS IoT Greengrass Lambdas for image classification and object detection. This feature helps developers deploy applications to multiple edge devices.
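At its core, the one-click benchmarking described above boils down to timing repeated inference calls on a compiled IR model. A minimal, framework-agnostic sketch of such a latency benchmark (an assumed illustration, not the DevCloud notebook itself; `infer` stands in for the inference call of a compiled OpenVINO model) could look like this:

```python
import statistics
import time

def benchmark(infer, iterations: int = 100, warmup: int = 10) -> dict:
    """Measure the latency of an inference callable.

    `infer` stands in for a compiled model's inference call (e.g. an
    OpenVINO CompiledModel invoked on a fixed input); here it can be
    any zero-argument callable.
    """
    for _ in range(warmup):  # warm-up runs are executed but not measured
        infer()
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        infer()
        latencies.append((time.perf_counter() - start) * 1000.0)  # ms
    mean_ms = statistics.mean(latencies)
    return {
        "mean_ms": mean_ms,
        "p90_ms": sorted(latencies)[int(0.9 * len(latencies))],
        "fps": 1000.0 / mean_ms,
    }

# Example with a stand-in "inference" function.
stats = benchmark(lambda: sum(range(1000)), iterations=20, warmup=2)
```

Because the IR format decouples the model from its original framework, the same benchmark harness can be reused unchanged for models converted from TensorFlow, PyTorch (via ONNX) or any other supported source.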