About nGraph Compiler stack
===========================

nGraph Compiler stack architecture
----------------------------------

The diagram below represents our current Beta release stack.
In the diagram, nGraph components are colored in gray. Please note
that the stack diagram is simplified to show how nGraph executes deep
learning workloads with two hardware backends; however, many other
deep learning frameworks and backends are currently supported.

![](doc/sphinx/source/graphics/arch_simple_pad.png)

#### Bridge

Starting from the top of the stack, nGraph receives a computational graph
from a deep learning framework such as TensorFlow* or MXNet*. The
computational graph is converted to an nGraph internal representation
by a bridge created for the corresponding framework.

An nGraph bridge examines the whole graph to pattern-match subgraphs
that nGraph knows how to execute, and these subgraphs are encapsulated.
Parts of the graph that are not encapsulated default to the framework
implementation when executed.
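
To make the idea concrete, here is a rough sketch (not the actual bridge code;
the `Node` type and `ngraph_supports` predicate are invented for illustration)
of how supported nodes, visited in topological order, can be grouped into
encapsulated clusters while everything else stays with the framework:

```cpp
#include <string>
#include <vector>

// Hypothetical framework-graph node; real bridges work on the
// TensorFlow or MXNet graph types instead.
struct Node {
    std::string op_type;
};

// Hypothetical predicate: can nGraph execute this op?
bool ngraph_supports(const Node& n) {
    return n.op_type == "MatMul" || n.op_type == "Add" || n.op_type == "Relu";
}

// Group consecutive supported nodes (visited in topological order) into
// clusters. Each cluster would later be replaced by one encapsulated op
// that nGraph compiles and runs; unsupported nodes keep their framework
// kernels.
std::vector<std::vector<Node>> encapsulate(const std::vector<Node>& topo_order) {
    std::vector<std::vector<Node>> clusters;
    std::vector<Node> current;
    for (const Node& n : topo_order) {
        if (ngraph_supports(n)) {
            current.push_back(n);
        } else if (!current.empty()) {
            clusters.push_back(current);
            current.clear();
        }
    }
    if (!current.empty()) {
        clusters.push_back(current);
    }
    return clusters;
}
```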

#### nGraph Core

nGraph uses a strongly typed, platform-neutral intermediate
representation (IR) to construct a "stateless" computational graph.
Each node, or *op*, in the graph corresponds to one step in a
computation, where each step produces zero or more tensor outputs
from zero or more tensor inputs.
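
For a feel of what this looks like at the API level, the following minimal
sketch builds a one-op graph with the nGraph C++ API as documented around the
Beta timeframe (`op::Parameter`, `op::Add`, `Function`); exact headers and
type names have shifted between releases, so treat it as illustrative:

```cpp
#include <memory>
#include <ngraph/ngraph.hpp>

using namespace ngraph;

int main() {
    // Two input tensors (parameters): 2x3 matrices of 32-bit floats.
    Shape shape{2, 3};
    auto a = std::make_shared<op::Parameter>(element::f32, shape);
    auto b = std::make_shared<op::Parameter>(element::f32, shape);

    // One step in the computation: element-wise addition producing a
    // single tensor output from two tensor inputs.
    auto sum = std::make_shared<op::Add>(a, b);

    // A Function is the stateless graph: result node(s) plus parameters.
    auto f = std::make_shared<Function>(sum, ParameterVector{a, b});
    return 0;
}
```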

This stateless, framework-neutral IR allows nGraph to apply its
state-of-the-art optimizations instead of having to follow how a
particular framework implements op execution, memory management,
data layouts, and so on.

In addition, using nGraph IR allows faster optimization delivery
for many of the supported frameworks. For example, if nGraph optimizes
ResNet* for TensorFlow*, the same optimization can be readily applied
to MXNet* or ONNX* implementations of ResNet*.

#### Hybrid Transformer

The hybrid transformer takes the nGraph IR and partitions it into
subgraphs, which can then be assigned to the best-performing backend.
Two hardware backends are shown in the stack diagram to demonstrate
this graph partitioning. The hybrid transformer assigns complex operations
(subgraphs) to the Intel® Nervana™ Neural Network Processor (NNP) to expedite
the computation, and the remaining operations default to CPU. In the future,
we will further expand the capabilities of the hybrid transformer
by enabling more features, such as localized cost modeling and memory
sharing.

Once the subgraphs are assigned, the corresponding backend will
execute the IR.
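
The placement policy itself is internal to the hybrid transformer, but
conceptually it is a cost-driven assignment. The sketch below is purely
illustrative (the `Subgraph` type and `cost` function are invented for the
example): each subgraph goes to whichever backend is estimated to run it
cheapest, with CPU as the fallback.

```cpp
#include <limits>
#include <string>
#include <vector>

// Hypothetical partitioned subgraph with a flag for "complex" ops
// (e.g., large convolutions) that an accelerator handles well.
struct Subgraph {
    bool has_complex_ops;
};

// Hypothetical cost model: lower is better. A real cost model would
// consider op types, tensor shapes, and data-transfer overhead.
double cost(const Subgraph& sg, const std::string& backend) {
    if (backend == "NNP") {
        return sg.has_complex_ops ? 1.0 : 10.0;  // accelerator wins on heavy ops
    }
    return 5.0;  // CPU: a reasonable default for everything else
}

std::vector<std::string> assign_backends(const std::vector<Subgraph>& subgraphs) {
    const std::vector<std::string> backends{"NNP", "CPU"};
    std::vector<std::string> placement;
    for (const auto& sg : subgraphs) {
        std::string best = "CPU";  // default to CPU
        double best_cost = std::numeric_limits<double>::max();
        for (const auto& be : backends) {
            double c = cost(sg, be);
            if (c < best_cost) {
                best_cost = c;
                best = be;
            }
        }
        placement.push_back(best);
    }
    return placement;
}
```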


#### Backends

Focusing our attention on the CPU backend, when the IR is passed to
the Intel® Architecture (IA) transformer, it can be executed in two modes:
Direct EXecution (DEX) and code generation (`codegen`).

In `codegen` mode, nGraph generates and compiles code that can call
into highly optimized kernels like MKL-DNN or JIT compilers like Halide.
Although our team wrote kernels for some operations specifically for nGraph,
nGraph also leverages existing kernel libraries such as MKL-DNN, Eigen, and MLSL.

The MLSL library is called when nGraph executes distributed training.
At the time of the nGraph Beta release, nGraph achieved state-of-the-art
results for ResNet50 with 16 and 32 nodes on TensorFlow* and MXNet*.
We are excited to continue our work in enabling distributed training,
and we plan to expand to 256 nodes in Q4 '18. Additionally, we
are testing model parallelism in addition to data parallelism.

The other mode of execution is Direct EXecution (DEX). In DEX mode,
nGraph can execute the operations by directly calling associated kernels
as it walks through the IR instead of compiling via `codegen`. This mode
reduces compilation time, and it will be useful for training,
deploying, and retraining a deep learning workload in production.
In our tests, DEX mode reduced ResNet50 compilation time by 30X.
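
To make the compile-and-run flow concrete, here is a minimal sketch based on
the Beta-era nGraph runtime API (`runtime::Backend::create`, `create_tensor`,
`compile`, `call_with_validate`); the exact signatures changed across releases
(in some versions the function is passed to the backend call directly rather
than to a compiled handle), and the choice between DEX and `codegen` is made
inside the CPU backend rather than in user code:

```cpp
#include <memory>
#include <ngraph/ngraph.hpp>

using namespace ngraph;

int main() {
    // f(a, b) = a + b, as in the earlier IR example.
    Shape shape{2, 3};
    auto a = std::make_shared<op::Parameter>(element::f32, shape);
    auto b = std::make_shared<op::Parameter>(element::f32, shape);
    auto f = std::make_shared<Function>(std::make_shared<op::Add>(a, b),
                                        ParameterVector{a, b});

    // Select the CPU (IA) transformer; other backend names select other
    // transformers when they are built in.
    auto backend = runtime::Backend::create("CPU");

    // Device-side tensors for the inputs and the result. Input data would
    // be written into t_a and t_b through the tensor write API here.
    auto t_a = backend->create_tensor(element::f32, shape);
    auto t_b = backend->create_tensor(element::f32, shape);
    auto t_r = backend->create_tensor(element::f32, shape);

    // Compile (via codegen or DEX, depending on how the backend was built)
    // and execute the function on the selected backend.
    auto handle = backend->compile(f);
    handle->call_with_validate({t_r}, {t_a, t_b});
    return 0;
}
```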

nGraph further tries to speed up the computation by leveraging
multi-threading and graph scheduling libraries such as OpenMP and
TBB Flow Graph.
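
As a hint of what graph-level scheduling with TBB Flow Graph looks like (a
generic TBB example, not nGraph's scheduler), two independent subgraphs can
be modeled as flow-graph nodes that TBB is free to run on different threads:

```cpp
#include <cstdio>
#include <tbb/flow_graph.h>

int main() {
    tbb::flow::graph g;

    // Two independent "subgraphs" modeled as flow-graph nodes; TBB may
    // execute their bodies concurrently on its worker threads.
    tbb::flow::function_node<int, int> subgraph_a(
        g, tbb::flow::unlimited, [](int x) {
            std::printf("running subgraph A\n");
            return x;
        });
    tbb::flow::function_node<int, int> subgraph_b(
        g, tbb::flow::unlimited, [](int x) {
            std::printf("running subgraph B\n");
            return x;
        });

    // Kick both off and wait for all scheduled work to finish.
    subgraph_a.try_put(0);
    subgraph_b.try_put(0);
    g.wait_for_all();
    return 0;
}
```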

Features
--------

nGraph performs a combination of device-specific and
non-device-specific optimizations:

-   **Fusion** -- Fuse multiple ops to decrease memory usage; a toy
    sketch follows this list.
-   **Data layout abstraction** -- Make abstraction easier and faster
    with nGraph translating element order to work best for a given or
    available device.
-   **Data reuse** -- Save results and reuse for subgraphs with the
    same input.
-   **Graph scheduling** -- Run similar subgraphs in parallel via
    multi-threading.
-   **Graph partitioning** -- Partition subgraphs to run on different
    devices to speed up computation; make better use of spare CPU cycles
    with nGraph.
-   **Memory management** -- Prevent peak memory usage by intercepting
    a graph with or by a "saved checkpoint," and enable data auditing.
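
As a toy illustration of what fusion buys (the functions below are invented
for the example and are not nGraph passes), fusing an element-wise multiply
and add into one op removes the intermediate tensor that the unfused version
has to materialize:

```cpp
#include <cstddef>
#include <vector>

// Unfused: y = a * b, then z = y + c. The intermediate tensor y is
// materialized in memory, then read back by the second loop.
std::vector<float> unfused(const std::vector<float>& a,
                           const std::vector<float>& b,
                           const std::vector<float>& c) {
    std::vector<float> y(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) y[i] = a[i] * b[i];
    std::vector<float> z(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) z[i] = y[i] + c[i];
    return z;
}

// Fused: a single multiply-add op computes z directly, so the intermediate
// never touches memory: lower peak memory usage and better locality.
std::vector<float> fused(const std::vector<float>& a,
                         const std::vector<float>& b,
                         const std::vector<float>& c) {
    std::vector<float> z(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) z[i] = a[i] * b[i] + c[i];
    return z;
}
```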

Beta Limitations
----------------

In this Beta release, nGraph only supports Just In Time compilation,
but we plan to add support for Ahead of Time compilation in the official
release of nGraph. nGraph currently has limited support for dynamic graphs.

Current nGraph Compiler full stack
----------------------------------

![](doc/sphinx/source/graphics/arch_complex.png)


In addition to the IA and NNP transformers, the nGraph Compiler stack has transformers
for multiple GPU types and an upcoming Intel deep learning accelerator. To
support the growing number of transformers, we plan to expand the capabilities
of the hybrid transformer with a cost model and memory sharing. With these new
features, even if nGraph has multiple backends targeting the same hardware, it
will partition the graph into multiple subgraphs and determine the best way to
execute each subgraph.