Add pre-trained TensorFlow Models to AWS DeepLens

First, download one of the pre-trained TensorFlow models from the TensorFlow detection model zoo GitHub repository.

For instance, ssd_mobilenet_v1_coco (~190 MB), a model trained on the COCO image set that can detect 80 object classes, from tennis rackets to microwave ovens.

Extract the contents of the archive and copy just the file frozen_inference_graph.pb to a new, empty folder. This is the pre-trained model we will deploy to the DeepLens.
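If you want to script this step, here is a minimal sketch (Python 3, run on your workstation, not on the DeepLens) that downloads the archive and extracts only the frozen graph. The archive URL, including the date suffix, is an assumption; check the model zoo page for the current link.

# Download the model archive and extract only frozen_inference_graph.pb
# into a new folder named tfmodel.
import os
import tarfile
import urllib.request

# NOTE: archive name/date is an assumption -- verify on the model zoo page.
MODEL_URL = ("http://download.tensorflow.org/models/object_detection/"
             "ssd_mobilenet_v1_coco_2017_11_17.tar.gz")

urllib.request.urlretrieve(MODEL_URL, "model.tar.gz")
os.makedirs("tfmodel", exist_ok=True)
with tarfile.open("model.tar.gz") as tar:
    for member in tar.getmembers():
        if member.name.endswith("frozen_inference_graph.pb"):
            member.name = os.path.basename(member.name)  # drop the folder prefix
            tar.extract(member, path="tfmodel")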

Also download the corresponding object detection label map. For the COCO image set this is mscoco_label_map.pbtxt, found in the object_detection/data folder of the tensorflow/models repository.

Copy this pbtxt file to the folder where you stored the frozen_inference_graph.pb in the previous step.
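Continuing the sketch above, the label map can be fetched the same way (the raw GitHub path is an assumption; verify it before running):

# Fetch the COCO label map into the same tfmodel folder created above.
import urllib.request

# NOTE: repository path is an assumption -- verify before running.
LABELS_URL = ("https://raw.githubusercontent.com/tensorflow/models/master/"
              "research/object_detection/data/mscoco_label_map.pbtxt")
urllib.request.urlretrieve(LABELS_URL, "tfmodel/mscoco_label_map.pbtxt")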

Now create a gzip-compressed tar file containing the model and the label map:

$ tar -czvf tfmodel.tar.gz *.pb *.pbtxt

Note: if you're a Windows user, you can use 7-Zip instead.

Upload the compressed file tfmodel.tar.gz to an S3 bucket. The bucket name must contain "deeplens", since the default DeepLens service role only grants access to buckets with "deeplens" in their name.
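You can upload via the S3 console or script it, for example with boto3 (the bucket name below is just an example; yours only needs to contain "deeplens"):

# Upload the model archive to S3 using boto3.
import boto3

s3 = boto3.client("s3")
# NOTE: example bucket name -- replace with your own "deeplens" bucket.
s3.upload_file("tfmodel.tar.gz", "deeplens-tfmodels-example", "tfmodel.tar.gz")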

Import the model

After uploading to S3, open the AWS DeepLens web console, navigate to Models and click "Import model":

Screenshot: click "Import model"

Select "externally trained model", enter the s3:// url pointing to the model you uploaded to the S3 bucket, select TensorFlow as framework and then click "import model" to save:

Select "externally trained model"
Select "externally trained model"

Create the Lambda function

Open the AWS Lambda web console and create a new Lambda function. Search for the greengrass-hello-world blueprint (the Python one, not the Node.js one!).

Screenshot: create a new Lambda function

Enter a name for the function and select the existing DeepLens role from the dropdown:

Screenshot: enter a name and select the role

Edit the function and replace the contents of the file greengrassHelloWorld.py with the following code:

# Import packages
print("greengrass lambda starting")
import os
import numpy as np
import tensorflow as tf
import greengrasssdk
from threading import Timer
import awscam

# Create a Greengrass Core SDK client
client = greengrasssdk.client('iot-data')

# The information exchanged between IoT and the cloud has
# a topic and a message body.
# This is the topic this code uses to send messages to the cloud.
iotTopic = '$aws/things/{}/infer'.format(os.environ['AWS_IOT_THING_NAME'])
modelPath = "/opt/awscam/artifacts"

# Path to the frozen detection graph .pb file, which contains the model
# that is used for object detection.
PATH_TO_CKPT = os.path.join(modelPath, 'frozen_inference_graph.pb')

# Label map used to translate class indices into display names.
PATH_TO_LABELS = os.path.join(modelPath, 'mscoco_label_map.pbtxt')

def greengrass_infinite_infer_run():
    try:
        # Load the TensorFlow model into memory.
        detection_graph = tf.Graph()
        with detection_graph.as_default():
            od_graph_def = tf.GraphDef()
            with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
                serialized_graph = fid.read()
                od_graph_def.ParseFromString(serialized_graph)
                tf.import_graph_def(od_graph_def, name='')

            sess = tf.Session(graph=detection_graph)

        client.publish(topic=iotTopic, payload="Model loaded")

        # Input tensor plus the output tensors we want to fetch per frame.
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        tensor_dict = {}
        for key in ['num_detections', 'detection_boxes',
                    'detection_scores', 'detection_classes']:
            tensor_dict[key] = detection_graph.get_tensor_by_name(key + ':0')

        # Load the label map: build a dict mapping class id -> display name.
        label_dict = {}
        with open(PATH_TO_LABELS, 'r') as f:
            label_id = ''
            for line in (s.strip() for s in f):
                if 'id:' in line:
                    label_id = line.split(':', 1)[1].replace('"', '').strip()
                    label_dict[label_id] = ''
                if 'display_name:' in line:
                    label_dict[label_id] = line.split(':', 1)[1].replace('"', '').strip()

        client.publish(topic=iotTopic, payload="Start inferencing")
        while True:
            ret, frame = awscam.getLastFrame()
            if not ret:
                raise Exception("Failed to get frame from the stream")
            # The model expects a batch dimension, so wrap the single frame.
            expanded_frame = np.expand_dims(frame, 0)
            # Perform the actual detection by running the model with the frame as input.
            output_dict = sess.run(tensor_dict, feed_dict={image_tensor: expanded_frame})
            scores = output_dict['detection_scores'][0]
            classes = output_dict['detection_classes'][0]
            # Only report inferences with a prediction score of 50% or higher.
            msg = '{'
            for idx, val in enumerate(scores):
                if val > 0.5:
                    msg += '"{}": {:.2f},'.format(label_dict[str(int(classes[idx]))], val * 100)
            msg = msg.rstrip(',') + '}'

            client.publish(topic=iotTopic, payload=msg)

    except Exception as e:
        msg = "Test failed: " + str(e)
        client.publish(topic=iotTopic, payload=msg)

    # Asynchronously schedule this function to run again in 15 seconds
    # (only reached if the loop above raised an exception).
    Timer(15, greengrass_infinite_infer_run).start()


# Execute the function above
greengrass_infinite_infer_run()


# This is a dummy handler and will not be invoked.
# Instead, the code above runs in an infinite loop for our example.
def function_handler(event, context):
    return

Save the Lambda function and publish a new version via Actions > Publish new version! This is very important: DeepLens will only use published versions.
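If you prefer to script this step, publishing a version can also be done with boto3 (the function name below is a placeholder for whatever you named your Lambda):

# Publish a new version of the Lambda function via boto3.
import boto3

lam = boto3.client("lambda")
# NOTE: placeholder function name -- use the name you chose above.
response = lam.publish_version(FunctionName="deeplens-tf-object-detection")
print("Published version", response["Version"])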

Create a new DeepLens Project


Open the projects view of the DeepLens web console and create a new project:

Screenshot: create a new project

Select "create new blank project" and assign a name to the project. 

Click on "add model" and select the model you created:

Screenshot "create new blank project"
Screenshot "create new blank project"

Click "add function" and select the lambda function you created in the previous step. 

After assigning model and function, you can save the project using the "create" button:

Screenshot: save the project

Time to start your DeepLens.

But before you can deploy the project to the DeepLens, you have to install the TensorFlow library on the device!

Open a terminal on your DeepLens Desktop or connect using SSH, then enter:

sudo pip2 install tensorflow==1.5.0
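To verify the installation, a quick check in the Python 2 interpreter on the device (matching the pip2 install above):

# Confirm that TensorFlow imports and reports the expected version.
import tensorflow as tf
print(tf.__version__)  # should print 1.5.0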

After installing the TensorFlow Python library, you can deploy the new project to the DeepLens in the web console.

Select the project from the project list and click "Deploy to device". 

Screenshot: select the project

When you have successfully deployed the project onto your DeepLens, the device will publish detected objects to AWS IoT as MQTT messages whenever it recognizes an object in front of the camera.

Note: This example is "headless" and will NOT draw bounding boxes into the project video stream like the object detection sample project does! Drawing bounding boxes into the live stream takes a lot of CPU power and noticeably slows down the DeepLens; that is why the object detection sample project is so sluggish. If you want bounding boxes in the live stream, integrate the "cv2" Python code from the sample project to render the boxes into the output stream; a rough sketch of the drawing part follows.
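A minimal sketch of the drawing step, meant to be merged into the inference loop above. It assumes the loop's frame, output_dict, scores, classes and label_dict variables; writing the annotated frame to the project stream is omitted (see the object detection sample project for that plumbing):

# Draw a labeled box for every detection above the 50% threshold.
import cv2

height, width = frame.shape[:2]
boxes = output_dict['detection_boxes'][0]
for idx, score in enumerate(scores):
    if score > 0.5:
        # Boxes arrive as normalized [ymin, xmin, ymax, xmax] coordinates.
        ymin, xmin, ymax, xmax = boxes[idx]
        top_left = (int(xmin * width), int(ymin * height))
        bottom_right = (int(xmax * width), int(ymax * height))
        cv2.rectangle(frame, top_left, bottom_right, (0, 255, 0), 2)
        label = '{} {:.0f}%'.format(label_dict[str(int(classes[idx]))], score * 100)
        cv2.putText(frame, label, top_left, cv2.FONT_HERSHEY_SIMPLEX,
                    1.0, (0, 255, 0), 2)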

To see the output of the camera (the detected objects, not the video stream!), copy the topic of the DeepLens device, which can be found in the "Project output" section of the DeepLens device page:

Screenshot: copy the topic of the DeepLens device

Open the AWS IoT console and subscribe to this topic to see the output of the camera:

Screenshot: subscribe to the topic to see the output of the camera

Now you should see the live output of the objects detected by the camera along with their probabilities:

Screenshot: live output
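Each message is a small JSON object mapping detected labels to their confidence in percent, exactly as built by the Lambda function above; the values here are illustrative:

{"person": 98.50, "chair": 72.31}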