Wednesday, November 21, 2018

IoT & Intelligent Edge on a Drone: Under the Hood & Bill of Materials

DISCLAIMER: No AI algorithms under NDA will be found here; it is up to everyone to create their own. Apart from that, you will find the ingredients needed to have an industrial AI running on the Edge! After an R&D period starting in 2016, this project was open-sourced in January 2018 when Scott Guthrie arrived in Paris for the Red Shirt Tour 18 (it can be found here, even if our doc is partially obsolete due to technology evolution: https://github.com/azugfr/RedShirtTour-IoT-Edge-AI-Lab)

NEXT: I will keep adding more information to this article progressively.

NEXT: I will explain how to put together an effective keynote that involves Julia White – Corporate Vice President, Microsoft Azure.

NEXT: Our VISEO team will share on GitHub, as open source, the code that is NOT under NDA.



INTRODUCTION

Context: During Julia White's keynote at Microsoft Experiences 2018 (*), together with Guilhem Villemin, I had the opportunity to illustrate Julia's point regarding IoT + IoT Edge + AI = Intelligent Edge: https://www.youtube.com/watch?v=S3ZPtwAPG8Q&t=92m28s&feature=youtu.be

(*) The largest European Microsoft event, with 20,000 people registered, over 150,000 connected to the live TV stream, plus many thousands more on replay


I was acting as a Microsoft Regional Director (part of Microsoft's worldwide "RD program") and an Azure MVP, working as an FTE at VISEO.



Architectural overview

Designed in 2017 and open-sourced in January 2018 – for Scott Guthrie – Red Shirt Tour labs

image  Vincent Thavonekham & Artem Sheiko next to THE MAN, Scott Guthrie

OpenSourced Labs: https://github.com/azugfr/RedShirtTour-IoT-Edge-AI-Lab


To go deeper into the topic:



BOM – Bill of Materials

  • Team: You need a solid Rambo team that will succeed in due time no matter how hard the work is; a great customer, ALTAMETRIS (a 100% spin-off of the SNCF railway company); a craftsman (well, here a craftswoman: Sacha Lhopital, who codes in a Clean Code manner and has AI expertise); and Artem Sheiko, our Data Scientist, whom I refer to during the keynote.


  • Massive amounts of data
    • Many thousands of images and videos as a dataset, per AI model to be created
      image
    • Many real devices, on top of the photos & videos


  • Azure Subscription
    • To orchestrate the DevOps deployment of the various Docker containers: Azure IoT Hub pushes modules down to IoT Edge
    • To centralize the source code: Azure DevOps CI/CD + Git
    • To run Azure Custom Vision : https://www.customvision.ai (free trial)
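
To illustrate that orchestration step, here is a trimmed sketch of the kind of IoT Edge deployment manifest IoT Hub pushes to the device. The module name and image URI are placeholders, not our actual registry, and a full manifest also declares the schemaVersion, the runtime, and the edgeAgent/edgeHub system modules:

```json
{
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "modules": {
          "aiDetector": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "myregistry.azurecr.io/ai-detector:1.0",
              "createOptions": "{}"
            }
          }
        }
      }
    }
  }
}
```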


  • DRONE : a real industrial drone running on Linux
    • Either this one (worth over half a million dollars), if you are one of the ten people in the world who own such a monster – or the one guy in France: Nicolas Pollet, the CEO of ALTAMETRIS, below.
      image
    • Or the one on stage (worth $20,000 to $40,000)
      image
    • Alternatively, use another drone without Linux and add an external “box” running Linux alongside it (it could be a Raspberry Pi, but then your AI has to be very light on CPU); however, you will not be able to get the drone’s telemetry (coordinates, orientation, battery level, …)
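
To make the telemetry point concrete, here is a minimal Python sketch of the kind of device-to-cloud message a Linux companion box could build from the drone's telemetry. The field names are hypothetical, since each drone SDK exposes its own schema:

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class DroneTelemetry:
    # Hypothetical fields: a real drone SDK exposes its own schema.
    latitude: float
    longitude: float
    altitude_m: float
    heading_deg: float
    battery_pct: int


def to_iot_message(telemetry: DroneTelemetry) -> str:
    """Serialize telemetry as the JSON body of a device-to-cloud message,
    the kind an IoT Edge module would forward to Azure IoT Hub."""
    return json.dumps(asdict(telemetry))


# Example reading, as if polled from the drone's flight controller:
msg = to_iot_message(DroneTelemetry(48.8566, 2.3522, 120.0, 270.0, 87))
```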



ON THE EDGE SIDE : Microsoft IoT Edge

  • Igor Leontiev created 3 ADVANTECH IoT boxes:
  1. One for CI/CD DevOps, with advanced debugging tools and a GUI
  2. Another one with a home-made Linux distribution:
    • no Linux GUI, to spare resources for the AI compute
      image
    • the IoT Edge SDK already packaged
    • a Docker application already packaged
  3. A last box used as a Dev/R&D box, to try out many combinations of installations:
    • Linux versions,
    • GUI vs. no GUI,
    • camera versions and tricky Linux drivers, combined with the fact that we run inside a Docker container,
    • camera types: USB vs. RJ45 vs. Wi-Fi,
    • AI stacks (many versions of TensorFlow, OpenCV, …),
    • CPU power needed to run the YOLO libraries.
  • Last but not least, Igor also created a streaming server, so that the camera OUTPUT is not the “barebone RAW” captured image but a version enhanced with the objects/defects the AI has detected. Possible outputs:
    • VGA cable (via a tiny and fragile drone-to-VGA adapter)
    • RJ45
    • Wi-Fi

For the demo, we used RJ45 as plan A and VGA as plan B, just in case.
From experience, the organizers of MS Experiences prefer to ban any Wi-Fi. Luckily, we had three possibilities.
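
Igor's streaming server itself is not open-sourced here, but a usual way to stream annotated frames over RJ45 is MJPEG over HTTP (multipart/x-mixed-replace). A minimal Python sketch of the per-frame framing such a server sends, assuming the frames are already JPEG-encoded by the AI module:

```python
BOUNDARY = b"frame"


def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Frame one JPEG image as a part of a multipart/x-mixed-replace
    (MJPEG over HTTP) stream: one such part is sent per annotated frame."""
    header = (b"--" + BOUNDARY + b"\r\n"
              + b"Content-Type: image/jpeg\r\n"
              + b"Content-Length: " + str(len(jpeg_bytes)).encode("ascii")
              + b"\r\n\r\n")
    return header + jpeg_bytes + b"\r\n"
```

A real server would declare `Content-Type: multipart/x-mixed-replace; boundary=frame` on the HTTP response and then write one part per frame in a loop.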


VISEO & IMAGE ACQUISITION

As with a real professional camera, we are not using a regular camera but one composed of two distinct pieces: one coming from Germany, the other imported from Japan.

  1. An industrial sensor, from the brand iDS
    sensor_iDS
  2. A professional lens
    Lens
  3. All combined
    image
  4. Installing the drivers – testing on Windows, then on Linux, then on Docker+Linux!
    image
  5. Once deployed in the drone
    https://twitter.com/VISEOGroup/status/1060129485717209088
    image

    image
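
One of the tricky points of step 4 is making the USB camera visible from inside the Docker container. With IoT Edge, a common approach is to map the host's video device into the module through the Docker createOptions; a hedged sketch, since the device path and permissions depend on the distribution and the driver:

```json
{
  "HostConfig": {
    "Devices": [
      {
        "PathOnHost": "/dev/video0",
        "PathInContainer": "/dev/video0",
        "CgroupPermissions": "rwm"
      }
    ]
  }
}
```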




HOW TO CREATE THE “HOME-MADE” AI?

This is another story, one that deserves a white paper of its own. Without disclosing any NDA information, be armed with:

  • A true Data Scientist (i.e., not someone who has vaguely done some maths and statistics)
  • A very strict and robust methodology (used for over 20 years in Data Mining) => CRISP-DM: Business Understanding, Data Understanding, Data Preparation, Modeling, Evaluation, Deployment
  • A lot of patience: hours, days, nights!!!
    Training, and re-training, and re-training, and re-training, and re-training… the AI model, until you get an adequate one
  • Some more hours, days and nights assessing the model on Linux – here Artem Sheiko, the master chef of this AI
    image
  • And lots of coffee, next to a skeptical customer, a top expert in the railway field, who pushed us to our limits!!
    image
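
The “re-training until adequate” loop above can be sketched as a simple acceptance check on the evaluation results; the precision/recall thresholds here are illustrative, not the ones used on the project:

```python
def accept_model(tp: int, fp: int, fn: int,
                 min_precision: float = 0.90,
                 min_recall: float = 0.85) -> bool:
    """Decide whether a trained detector is 'adequate' or needs another
    training round, from its true/false positives and false negatives.
    The thresholds are illustrative defaults."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision >= min_precision and recall >= min_recall
```

In practice, this is the kind of gate an evaluation script runs after each training round: if it returns False, go back to the dataset and re-train.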


APPENDICES