Graph.queue_inference_with_fifo_elem()
Info | Value |
---|---|
Package | mvnc |
Module | mvncapi |
Version | 2.0 |
See also | Graph, Graph.queue_inference(), Fifo, Fifo.write_elem() |
Overview
This method writes input tensor data to an input Fifo and then queues an inference.
Syntax
graph.queue_inference_with_fifo_elem(input_fifo, output_fifo, input_tensor, user_obj)
Parameters
Parameter | Type | Description |
---|---|---|
input_fifo | Fifo | A Fifo queue for graph inputs. The FifoState must be ALLOCATED. |
output_fifo | Fifo | A Fifo queue for graph outputs. The FifoState must be ALLOCATED. |
input_tensor | numpy.ndarray | Input tensor data of the type specified by the FifoDataType option. |
user_obj | any | User-defined data that will be returned along with the inference result. This can be anything that you want associated with the inference result, such as the original inference input or a window handle, or None. |
Return
None
Raises
Exception with a status code from Status if underlying function calls return a status other than Status.OK.
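The error-handling pattern can be sketched as follows. This is an illustrative stub, not the real API: `queue_inference_stub`, `STATUS_OK`, and `STATUS_ERROR` are stand-ins for a real mvncapi call and the Status enumeration.

```python
# Hedged sketch: mvncapi methods raise a plain Exception carrying a Status
# code when an underlying call does not return Status.OK. The stub below
# stands in for graph.queue_inference_with_fifo_elem(); the status values
# are illustrative, not the real Status enumeration.
STATUS_OK = 0
STATUS_ERROR = -2  # hypothetical non-OK status for illustration


def queue_inference_stub(status):
    """Stand-in for a real mvncapi call that can fail."""
    if status != STATUS_OK:
        raise Exception(status)


try:
    queue_inference_stub(STATUS_ERROR)
    outcome = 'queued'
except Exception as exc:
    outcome = exc.args[0]  # the status code carried by the exception
```

In real code, the same try/except wraps the `graph.queue_inference_with_fifo_elem()` call and the caught value is compared against members of Status.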
Notes
- The GraphState must be ALLOCATED.
- This method takes the place of explicit calls to Fifo.write_elem() and Graph.queue_inference().
- The input_fifo FifoType must allow write access for the API and the output_fifo must allow read access for the API.
- Fifo.write_elem(), which is called internally by this method, is a blocking call if FifoOption.RW_DONT_BLOCK is false. If the Fifo is full, this method will not return until there is space to write to the Fifo.
- The Fifo’s capacity is set when you create the Fifo with Graph.allocate_with_fifos() or Fifo.allocate().
- You can check the capacity and the current fill level of the Fifo with Fifo.get_option() for FifoOption.RO_CAPACITY and FifoOption.RO_WRITE_FILL_LEVEL.
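The capacity and blocking behavior described in the notes above can be sketched with Python's standard queue module. This is an analogy only; the real checks use Fifo.get_option() against a live device.

```python
import queue

# A queue with a fixed capacity stands in for a Fifo whose capacity was
# set at allocation time (cf. FifoOption.RO_CAPACITY).
fifo = queue.Queue(maxsize=2)

fifo.put('tensor-a')
fifo.put('tensor-b')

# With block=False the write fails immediately when the queue is full,
# mirroring FifoOption.RW_DONT_BLOCK set to true. With the default
# blocking put(), the call would instead wait for space, like
# Fifo.write_elem() when RW_DONT_BLOCK is false.
try:
    fifo.put('tensor-c', block=False)
    write_refused = False
except queue.Full:
    write_refused = True

fill_level = fifo.qsize()  # analogous to FifoOption.RO_WRITE_FILL_LEVEL
```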
Example
from mvnc import mvncapi
#
# Create and open a Device...
#
# Create a Graph
graph = mvncapi.Graph('graph1')
# Read a compiled network graph from file (set the graph_filepath correctly for your graph file)
graph_filepath = './graph'
with open(graph_filepath, 'rb') as f:
    graph_buffer = f.read()
# Allocate the graph on the device and create input and output Fifos
input_fifo, output_fifo = graph.allocate_with_fifos(device, graph_buffer)
#
# Read and pre-process input tensor data into a numpy.ndarray named input_tensor...
#
# Write the input to the input_fifo buffer and queue an inference in one call
graph.queue_inference_with_fifo_elem(input_fifo, output_fifo, input_tensor, 'object1')
#
# Read the output from the output_fifo and use it as needed...
#
# Deallocate and destroy the fifo and graph handles, close the device, and destroy the device handle
input_fifo.destroy()
output_fifo.destroy()
graph.destroy()
device.close()
device.destroy()
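As noted above, this method takes the place of explicit calls to Fifo.write_elem() and Graph.queue_inference(). That relationship can be sketched with stand-in classes; these mocks mirror the mvncapi method names but are not the real, device-backed API.

```python
class MockFifo:
    """Stand-in for mvncapi.Fifo; stores written elements in a list."""
    def __init__(self):
        self.elements = []

    def write_elem(self, input_tensor, user_obj):
        self.elements.append((input_tensor, user_obj))


class MockGraph:
    """Stand-in for mvncapi.Graph; records queued inferences."""
    def __init__(self):
        self.queued = []

    def queue_inference(self, input_fifo, output_fifo):
        self.queued.append((input_fifo, output_fifo))

    def queue_inference_with_fifo_elem(self, input_fifo, output_fifo,
                                       input_tensor, user_obj):
        # The convenience method performs both steps in one call:
        # first write the input element, then queue the inference.
        input_fifo.write_elem(input_tensor, user_obj)
        self.queue_inference(input_fifo, output_fifo)


graph = MockGraph()
input_fifo, output_fifo = MockFifo(), MockFifo()
graph.queue_inference_with_fifo_elem(input_fifo, output_fifo,
                                     'tensor-data', 'object1')
```

After the call, the input element sits in the input Fifo and one inference is queued, exactly as if the two underlying methods had been called separately.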