Graph.queue_inference_with_fifo_elem()

Info      Value
Package   mvnc
Module    mvncapi
Version   2.0
See also  Graph, Graph.queue_inference(), Fifo, Fifo.write_elem()

Overview

This convenience method writes input tensor data to an input Fifo and queues an inference in a single call, combining the work of Fifo.write_elem() and Graph.queue_inference().

Syntax

graph.queue_inference_with_fifo_elem(input_fifo, output_fifo, input_tensor, user_obj)

Parameters

Parameter     Type           Description
input_fifo    Fifo           A Fifo queue for graph inputs. Its FifoState must be ALLOCATED.
output_fifo   Fifo           A Fifo queue for graph outputs. Its FifoState must be ALLOCATED.
input_tensor  numpy.ndarray  Input tensor data of the type specified by the Fifo's FifoDataType option.
user_obj      any            User-defined data returned along with the inference result. This can be anything you want associated with the result, such as the original inference input or a window handle, or None.
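The input_tensor must already be in the data type configured on the input Fifo (commonly FP32). A minimal numpy sketch of that conversion; the 28x28 single-channel shape and the [0, 1] scaling are assumptions for illustration, not requirements of the API:

```python
import numpy as np

# Hypothetical raw 8-bit image data; the shape is an assumption --
# use whatever your network's input layer actually expects
raw = np.random.randint(0, 256, (28, 28), dtype=np.uint8)

# Convert to float32 to match a Fifo configured with an FP32 data type,
# scaling pixel values to [0, 1] as a typical preprocessing step
input_tensor = raw.astype(np.float32) / 255.0

print(input_tensor.dtype)   # float32
print(input_tensor.shape)   # (28, 28)
```

If the Fifo were configured for FP16 instead, the same pattern applies with numpy.float16.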

Return

None

Raises

An Exception is raised with a status code from the Status enumeration if an underlying function call returns a status other than Status.OK.


Example

from mvnc import mvncapi

#
# Create and open a Device...
#

# Create a Graph
graph = mvncapi.Graph('graph1')

# Read a compiled network graph from file (set the graph_filepath correctly for your graph file)
graph_filepath = './graph'
with open(graph_filepath, 'rb') as f:
    graph_buffer = f.read()

# Allocate the graph on the device and create input and output Fifos
input_fifo, output_fifo = graph.allocate_with_fifos(device, graph_buffer)

import numpy

# Create example input data as a float32 ndarray (the shape here is a
# placeholder; use data matching your network's expected input)
input_tensor = numpy.random.rand(1, 3, 224, 224).astype(numpy.float32)

# Write the input to the input_fifo buffer and queue an inference in one call
graph.queue_inference_with_fifo_elem(input_fifo, output_fifo, input_tensor, 'object1')

# Read the inference result and the associated user_obj from the output_fifo
output_tensor, user_obj = output_fifo.read_elem()

# Use the output as needed...

# Deallocate and destroy the fifo and graph handles, close the device, and destroy the device handle
input_fifo.destroy()
output_fifo.destroy()
graph.destroy()
device.close()
device.destroy()