.. frameworks/onnx_integ.rst:

ONNX Support
============

nGraph is able to import and execute ONNX models. Imported models are converted 
to nGraph's :abbr:`Intermediate Representation (IR)` and represented as 
``Function`` objects, which can be compiled and executed with nGraph backends.
9

You can use nGraph's Python API to run an ONNX model, or use nGraph as a 
backend to ONNX with the add-on package `nGraph ONNX`_.


.. note:: To support ONNX, nGraph must be built with the 
   ``NGRAPH_ONNX_IMPORT_ENABLE`` flag. See `Building nGraph-ONNX`_ for more 
   information. All nGraph packages published on PyPI are built with ONNX 
   support.

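If you are unsure whether a given nGraph build includes the ONNX importer, one 
quick check is to try importing the importer module itself (the module path is 
the one used later in this guide); a minimal sketch:

.. code-block:: python

    # Probe for ONNX importer support: if nGraph was built without
    # NGRAPH_ONNX_IMPORT_ENABLE, this import raises ImportError.
    try:
        from ngraph.impl.onnx_import import import_onnx_model  # noqa: F401
        has_onnx_support = True
    except ImportError:
        has_onnx_support = False

    print("ONNX import support:", has_onnx_support)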

Importing an ONNX model
-----------------------

You can download models from the `ONNX Model Zoo`_. For example, ResNet-50:

::

    $ wget https://s3.amazonaws.com/download.onnx/models/opset_9/resnet50.tar.gz
    $ tar -xzvf resnet50.tar.gz


Use the following Python commands to convert the downloaded model to an nGraph 
``Function``:

.. code-block:: python

    # Import ONNX and load an ONNX file from disk
    >>> import onnx
    >>> onnx_protobuf = onnx.load('resnet50/model.onnx')

    # Convert ONNX model to an ngraph model
    >>> from ngraph.impl.onnx_import import import_onnx_model
    >>> ng_function = import_onnx_model(onnx_protobuf.SerializeToString())

    # The importer returns a list of ngraph models for every ONNX graph output:
    >>> print(ng_function)
    <Function: 'resnet50' ([1, 1000])>


This creates an nGraph ``Function`` object, which can be used to execute a 
computation on a chosen backend.

Running a computation
---------------------

You can now create an nGraph ``Runtime`` backend and use it to compile your 
``Function`` to a backend-specific ``Computation`` object. Finally, you can 
execute your model by calling the created ``Computation`` object with input 
data:

.. code-block:: python

    # Using an nGraph runtime (CPU backend) create a callable computation object
    >>> import ngraph as ng
    >>> runtime = ng.runtime(backend_name='CPU')
    >>> resnet_on_cpu = runtime.computation(ng_function)
    >>> print(resnet_on_cpu)
    <Computation: resnet50(Parameter_269)>

    # Load an image (or create a mock as in this example)
    >>> import numpy as np
    >>> picture = np.ones([1, 3, 224, 224], dtype=np.float32)

    # Run computation on the picture:
    >>> resnet_on_cpu(picture)
    [array([[2.16105007e-04, 5.58412226e-04, 9.70510227e-05, 5.76671446e-05,
             7.45318757e-05, 4.80892748e-04, 5.67404088e-04, 9.48728994e-05,
             ...

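The call returns a list with one NumPy array per model output; here, a 
``[1, 1000]`` array of class scores. Post-processing the scores is left to the 
caller; a common next step (plain NumPy, not part of nGraph) is a softmax 
followed by a top-k lookup. The sketch below uses a mock scores array in place 
of the real model output:

.. code-block:: python

    import numpy as np

    # Mock result shaped like the ResNet-50 output above: 1 image x 1000
    # class scores. In practice this is the array from resnet_on_cpu(picture).
    rng = np.random.default_rng(0)
    scores = rng.random((1, 1000)).astype(np.float32)

    # Softmax over the class axis turns raw scores into probabilities.
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)

    # Indices of the five highest-probability classes for the first image.
    top5 = np.argsort(probs[0])[::-1][:5]
    print(top5)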

Find more information about nGraph and ONNX in the 
`nGraph ONNX`_ GitHub\* repository.


.. _ngraph ONNX: https://github.com/NervanaSystems/ngraph-onnx
.. _Building nGraph-ONNX: https://github.com/NervanaSystems/ngraph-onnx/blob/master/BUILDING.md
.. _ONNX model zoo: https://github.com/onnx/models