Commit b39d0aab authored by Leona C, committed by Scott Cyphers

Latest illustrations of nG components and features (#3617)

* Latest illustrations of nG components and features

* Update ABOUT.md

* Update full stack diagram

* fix typo

* Updating as per discussion
parent c1abb9a0
@@ -4,18 +4,19 @@ About nGraph Compiler stack
nGraph Compiler stack architecture
----------------------------------
The diagram below represents our current release stack. In the diagram,
nGraph components are colored in gray. Please note that the stack diagram
is simplified to show how nGraph executes deep learning workloads with two
hardware backends; however, many other deep learning frameworks and
backends are currently functional.
![](doc/sphinx/source/graphics/ngraph_arch_diag.png)
## Bridge
Starting from the top of the stack, nGraph receives a computational graph
from a deep learning framework such as TensorFlow or MXNet. The
computational graph is converted to an nGraph internal representation
by a bridge created for the corresponding framework.
@@ -24,7 +25,8 @@ which nGraph knows how to execute, and these subgraphs are encapsulated.
Parts of the graph that are not encapsulated will default to framework
implementation when executed.
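As a rough sketch of how a framework hands its graph to nGraph, the example below enables the TensorFlow bridge from Python. It assumes the `ngraph-tensorflow-bridge` package is installed and exposes an `ngraph_bridge` module; the exact package and module names may differ between releases, so treat this as illustrative rather than definitive.

```python
# Sketch: route a TensorFlow graph through the nGraph bridge (assumed API).
# Importing ngraph_bridge is expected to register nGraph with TensorFlow so
# that supported subgraphs are encapsulated and executed by nGraph, while
# unsupported parts fall back to the framework's own implementation.
import tensorflow as tf
import ngraph_bridge  # assumed module name from ngraph-tensorflow-bridge

a = tf.placeholder(tf.float32, shape=(2, 3), name="a")
b = tf.placeholder(tf.float32, shape=(2, 3), name="b")
c = a + b  # eligible ops land in an nGraph-encapsulated subgraph

with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: [[1., 2., 3.], [4., 5., 6.]],
                                 b: [[6., 5., 4.], [3., 2., 1.]]}))
```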
## nGraph Core
nGraph uses a strongly-typed and platform-neutral
`Intermediate Representation (IR)` to construct a "stateless"
@@ -38,10 +40,11 @@ memory management, data layouts, etc.
In addition, using nGraph IR allows faster optimization delivery
for many of the supported frameworks. For example, if nGraph optimizes
ResNet for TensorFlow, the same optimization can be readily applied
to MXNet or ONNX implementations of ResNet.
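To make the IR concrete, here is a minimal sketch that builds and runs a tiny nGraph computation through the Python API. The `ng.parameter`, `ng.runtime`, and `runtime.computation` names follow the nGraph Python API examples; verify them against your installed version.

```python
# Sketch: construct a small, stateless nGraph dataflow graph and execute it
# on the CPU backend via the Python API (names assumed from ngraph-core).
import numpy as np
import ngraph as ng

# Graph inputs are strongly-typed parameters; ops create new graph nodes.
A = ng.parameter(shape=[2, 2], name="A", dtype=np.float32)
B = ng.parameter(shape=[2, 2], name="B", dtype=np.float32)
C = ng.parameter(shape=[2, 2], name="C", dtype=np.float32)
model = (A + B) * C

# Compile the graph for a backend and call it like a function.
runtime = ng.runtime(backend_name="CPU")
computation = runtime.computation(model, A, B, C)

ones = np.ones((2, 2), dtype=np.float32)
print(computation(ones, ones, ones))  # expected: [[2. 2.] [2. 2.]]
```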
## Hybrid Transformer
Hybrid transformer takes the nGraph IR and partitions it into
subgraphs, which can then be assigned to the best-performing backend.
@@ -57,7 +60,7 @@ Once the subgraphs are assigned, the corresponding backend will
execute the IR.
## Backends
Focusing our attention on the CPU backend, when the IR is passed to
the Intel® Architecture (IA) transformer, it can be executed in two modes:
@@ -70,10 +73,7 @@ nGraph leverages existing kernel libraries such as MKL-DNN, Eigen, and MLSL.
The MLSL library is called when nGraph executes distributed training.
At the time of the nGraph Beta release, nGraph achieved state-of-the-art
results for ResNet50 with 16 nodes and 32 nodes for TensorFlow and MXNet.
We are excited to continue our work on distributed training, and we plan
to expand to 256 nodes in Q4 '18. We are also testing model parallelism
in addition to data parallelism.
The other mode of execution is Direct EXecution (DEX). In DEX mode,
nGraph can execute the operations by directly calling associated kernels
@@ -106,23 +106,16 @@ non-device-specific optimizations:
- **Memory management** -- Prevent peak memory usage by intercepting
  a graph with, or by, a "saved checkpoint," and enable data auditing.
Limitations
-----------

In this Beta release, nGraph supports Just-In-Time (JIT) compilation only,
but we plan to add support for Ahead-of-Time (AoT) compilation in the
official release of nGraph. nGraph currently has limited support for
dynamic graphs.
Current nGraph Compiler full stack
----------------------------------
![](doc/sphinx/source/graphics/ngraph_full_stack_diagrams.png)
In addition to IA and NNP transformers, the nGraph Compiler stack has transformers
for multiple GPU types and an upcoming Intel deep learning accelerator. To
support the growing number of transformers, we plan to expand the capabilities
of the hybrid transformer with a cost model and memory sharing. With these new
features, even if nGraph has multiple backends targeting the same hardware, it
will partition the graph into multiple subgraphs and determine the best way to
execute each subgraph.
@@ -36,7 +36,7 @@ seen performance boosts running workloads that are not included on the list of
Additionally, we have integrated nGraph with [PlaidML] to provide deep learning
performance acceleration on Intel, NVIDIA, and AMD GPUs. More details on the current
architecture of the nGraph Compiler stack can be found in [Architecture and features],
and recent changes to the stack are explained in the [Release Notes].
## What is nGraph Compiler?
@@ -50,15 +50,9 @@ deep learning accelerators: Intel® Nervana™ Neural Network Processor for Lear
Inference respectively. Future plans for supporting additional deep learning frameworks
and backends are outlined in the [ecosystem] section.
![](doc/sphinx/source/graphics/nGraph_main.png)
While the ecosystem shown above is all functioning, for the Beta release of
nGraph we have validated performance for deep learning inference on CPU
processors such as Intel® Xeon®. The Gold release is targeted for June 2019;
it will feature broader workload coverage, including quantized graphs (int8),
and will implement support for dynamic shapes.
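As a concrete example of the framework-neutral path, the sketch below imports an ONNX model and runs it on an nGraph backend, along the lines of the [how to import] tutorial. The `ngraph_onnx` importer module and `import_onnx_file` function are taken from that tutorial and may differ by release; the model path and input shape are placeholders.

```python
# Sketch: import an ONNX model and execute it with nGraph (assumed API from
# the ONNX tutorial; the model file and input shape are placeholders).
import numpy as np
import ngraph as ng
from ngraph_onnx.onnx_importer.importer import import_onnx_file

ng_function = import_onnx_file("model.onnx")   # placeholder model path

runtime = ng.runtime(backend_name="CPU")       # or another supported backend
computation = runtime.computation(ng_function)

input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
print(computation(input_data))
```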
Our documentation has extensive information about how to use the nGraph Compiler
stack to create an nGraph computational graph, integrate custom frameworks,
and interact with supported backends. If you wish to contribute to the
@@ -77,9 +71,10 @@ to improve it:
* In the case of a larger feature, create a test.
* Submit a [pull request].
* Make sure your PR passes all CI tests. Note: You can test locally with `make check`.
We will review your contribution and, if any additional fixes or modifications are
necessary, may provide feedback to guide you. When accepted, your pull request will
be merged to the repository.
[Ecosystem]: ./ecosystem-overview.md
@@ -96,7 +91,7 @@ to improve it:
[contrib guide]: https://www.ngraph.ai/documentation/contributing/guide
[pull request]: https://github.com/NervanaSystems/ngraph/pulls
[how to import]: https://www.ngraph.ai/tutorials/onnx-tutorial#import-a-model-with-onnx-and-ngraph
[ngraph_wireframes_with_notice]: doc/sphinx/source/graphics/nGraph_main.png "nGraph components"
[build-status]: https://travis-ci.org/NervanaSystems/ngraph/branches
[build-status-badge]: https://travis-ci.org/NervanaSystems/ngraph.svg?branch=master
[PlaidML]: https://github.com/plaidml/plaidml