nGraph Compiler stack

nGraph is an open-source graph compiler for Artificial Neural Networks (ANNs). The nGraph Compiler stack provides an inherently efficient graph-based compilation infrastructure designed to be compatible with many upcoming processors, such as the Intel Nervana™ Neural Network Processor (Intel® Nervana™ NNP), while also unlocking a massive performance boost on existing hardware targets, both GPUs and CPUs. Its flexible infrastructure makes it much easier to create Deep Learning (DL) models that adhere to the "write once, run anywhere" mantra, enabling your AI solutions to go from concept to production to scale.

Frameworks using nGraph to execute workloads have shown up to a 45X performance boost compared to native framework implementations.

Using the Python API

nGraph can be used directly with the Python API described here, or with the C++ API described in the core documentation. Alternatively, its performance benefits can be realized through frontends such as TensorFlow, PaddlePaddle, and ONNX. You can also create your own custom framework that integrates directly with nGraph Ops for highly targeted graph execution.

Installation

nGraph is available as binary wheels you can install from PyPI. nGraph binary wheels are currently tested on Ubuntu 16.04. To build and test on other systems, you may want to try building from source.

Installing nGraph Python API from PyPI is easy:

pip install ngraph-core

Usage example

Using nGraph's Python API to construct a computation graph and execute a computation is simple. The following example shows how to create a minimal (A + B) * C computation graph and calculate a result using three NumPy arrays as input.

import numpy as np
import ngraph as ng

A = ng.parameter(shape=[2, 2], name='A', dtype=np.float32)
B = ng.parameter(shape=[2, 2], name='B', dtype=np.float32)
C = ng.parameter(shape=[2, 2], name='C', dtype=np.float32)
# >>> print(A)
# <Parameter: 'A' ([2, 2], float)>

model = (A + B) * C
# >>> print(model)
# <Multiply: 'Multiply_14' ([2, 2])>

runtime = ng.runtime(backend_name='CPU')
# >>> print(runtime)
# <Runtime: Backend='CPU'>

computation = runtime.computation(model, A, B, C)
# >>> print(computation)
# <Computation: Multiply_14(A, B, C)>

value_a = np.array([[1, 2], [3, 4]], dtype=np.float32)
value_b = np.array([[5, 6], [7, 8]], dtype=np.float32)
value_c = np.array([[9, 10], [11, 12]], dtype=np.float32)

result = computation(value_a, value_b, value_c)
# >>> print(result)
# [[ 54.  80.]
#  [110. 144.]]

print('Result = ', result)
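As a sanity check, and to make the expected output concrete, the same (A + B) * C arithmetic can be reproduced with plain NumPy (no nGraph required); this is not part of the nGraph API, just an eager-mode cross-check that should match the result printed above.

```python
import numpy as np

# The same three inputs used in the nGraph example above
value_a = np.array([[1, 2], [3, 4]], dtype=np.float32)
value_b = np.array([[5, 6], [7, 8]], dtype=np.float32)
value_c = np.array([[9, 10], [11, 12]], dtype=np.float32)

# (A + B) * C, computed element-wise with NumPy
expected = (value_a + value_b) * value_c
print(expected)
# [[ 54.  80.]
#  [110. 144.]]
```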