|See also||Fifo, Fifo.read_elem(), Graph.queue_inference_with_fifo_elem()|
This method writes an element to the Fifo, usually the input tensor for an inference.
After the tensor data is written to the Fifo, an inference can be queued with Graph.queue_inference(). Alternatively, Graph.queue_inference_with_fifo_elem() can be used to write the tensor to the input Fifo and queue the inference in one call.
|input_tensor||numpy.ndarray||Input tensor data of the type specified by the FifoDataType option. This data is typically a representation of each color channel in each pixel of an image.|
|user_obj||any||User-defined data that will be returned along with the inference result. This can be anything that you want associated with the inference result, such as the original inference input or a window handle, or can be None.|
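As the parameter table notes, `input_tensor` is typically image data converted to the numeric type configured for the Fifo. A minimal sketch of that preparation step, using NumPy and assuming a Fifo configured for 32-bit floats (the actual dtype must match the `FifoDataType` option you set):

```python
import numpy as np

# Hypothetical 8-bit RGB image, 4x4 pixels; in practice this would come
# from a camera or an image file.
image = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

# Convert to 32-bit floats scaled to [0.0, 1.0]. The target dtype must
# match the Fifo's FifoDataType option (FP32 here is an assumption).
input_tensor = image.astype(np.float32) / 255.0
```

The resulting array can then be passed to `Fifo.write_elem()` as shown in the example at the end of this page.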
An Exception with a status code from Status is raised if an underlying function call returns a status other than Status.OK.
- The FifoType set during initialization must allow write access from the API (for example, FifoType.HOST_WO for a typical input Fifo).
- The Fifo cannot be written to or read from until it is allocated with Graph.allocate_with_fifos() or Fifo.allocate().
- This is a blocking call if FifoOption.RW_DONT_BLOCK is false. If the Fifo is full, this method will not return until there is enough space to complete the write.
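The blocking behavior described in the last note can be illustrated with Python's standard `queue.Queue`, whose bounded `put()` has analogous blocking and non-blocking modes. This is an analogy only; the real Fifo capacity and the `RW_DONT_BLOCK` option are handled by the mvnc API, not by `queue.Queue`:

```python
import queue

# A bounded queue.Queue standing in for an allocated Fifo with capacity 2;
# this is an analogy, not the mvnc API.
fifo = queue.Queue(maxsize=2)

fifo.put('tensor_a')  # returns immediately: space is available
fifo.put('tensor_b')  # fills the queue

# Non-blocking write, analogous to enabling FifoOption.RW_DONT_BLOCK:
# it fails fast instead of waiting for space.
try:
    fifo.put('tensor_c', block=False)
    wrote = True
except queue.Full:
    wrote = False  # the queue was full; a blocking write would have waited here

fifo.get()            # a consumer drains one element...
fifo.put('tensor_c')  # ...and the write now succeeds without waiting
```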
```python
from mvnc import mvncapi

#
# Open a Device, create a Graph, and load graph data from file...
#

# Allocate the Graph and create and allocate two associated Fifos for input and output
input_fifo, output_fifo = graph.allocate_with_fifos(device, graph_buffer)

#
# Get an input tensor and do pre-processing...
#

# Write the input tensor to the input Fifo
input_fifo.write_elem(input_tensor, 'input1')

#
# Queue an inference with Graph.queue_inference(), read the result and do something with it...
#

# Destroy the Fifos
input_fifo.destroy()
output_fifo.destroy()

#
# Perform other clean up...
#
```