Commit 90aa7336 authored by harryskim's avatar harryskim Committed by Scott Cyphers

Harryk remove winml ref (#2204)

* Removed winml from stack diagram

* Removed winml from full stack diagram

* Update README.md

* update the diagram without winml

* Changed sentence about WinML

* Removed duplication
parent fcdfc4ce
@@ -117,7 +117,7 @@ release of nGraph. nGraph currently has limited support for dynamic graphs.
 Current nGraph Compiler full stack
 ----------------------------------
-![](doc/sphinx/source/graphics/full-ngstck.png)
+![](doc/sphinx/source/graphics/about_fullstack.png)
 In addition to IA and NNP transformers, nGraph Compiler stack has transformers
@@ -93,7 +93,7 @@ to improve it:
 [contrib guide]: https://ngraph.nervanasys.com/docs/latest/project/code-contributor-README.html
 [pull request]: https://github.com/NervanaSystems/ngraph/pulls
 [how to import]: https://ngraph.nervanasys.com/docs/latest/howto/import.html
-[ngraph_wireframes_with_notice]: doc/sphinx/source/graphics/ngraph_wireframes_with_notice_updated.png "nGraph wireframe"
+[ngraph_wireframes_with_notice]: doc/sphinx/source/graphics/readme_stack.png "nGraph wireframe"
 [ngraph-compiler-stack-readme]: doc/sphinx/source/graphics/ngraph-compiler-stack-readme.png "nGraph Compiler Stack"
 [build-status]: https://travis-ci.org/NervanaSystems/ngraph/branches
 [build-status-badge]: https://travis-ci.org/NervanaSystems/ngraph.svg?branch=master
@@ -15,9 +15,8 @@ DNN (Deep Neural Network) model can use nGraph to bypass significant
 framework-based complexity and [import it] to test or run on targeted and
 efficient backends with our user-friendly Python-based API.
-nGraph is also integrated as a computation provider for [ONNX Runtime],
-which is a runtime for [WinML] on Windows OS and Azure to accelerate DL
-workloads.
+nGraph is also integrated as an execution provider for [ONNX Runtime],
+which is the first publicly available inference engine for ONNX.
 The table below summarizes our current progress on supported frameworks.
 If you are an architect of a framework wishing to take advantage of speed
@@ -29,7 +28,7 @@ and multi-device support of nGraph Compiler, please refer to [Framework integration guide]
 | TensorFlow* 1.12 | :heavy_check_mark: | :heavy_check_mark:
 | MXNet* 1.3 | :heavy_check_mark: | :heavy_check_mark:
 | ONNX 1.3 | :heavy_check_mark: | :heavy_check_mark:
-| ONNX Runtime Functional | Functional | No
+| ONNX Runtime | Functional | No
 | PyTorch (via ONNXIFI) | Functional | No
 | PaddlePaddle | Functional | No
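The README text in the hunks above describes importing a serialized ONNX model through nGraph's Python-based API. A minimal sketch of that workflow follows, assuming the historical `ngraph` and `ngraph-onnx` Python packages; the module path, `import_onnx_model`, and `ng.runtime` names are taken from that era's documentation and may differ between releases:

```python
# Hypothetical sketch: import an ONNX model and compile it for nGraph's CPU
# backend. Package and function names are assumptions based on the historical
# ngraph / ngraph-onnx Python API, not a guaranteed current interface.
try:
    import onnx
    import ngraph as ng
    from ngraph_onnx.onnx_importer.importer import import_onnx_model

    # Load a serialized ONNX model and convert it to an nGraph function.
    ng_function = import_onnx_model(onnx.load("model.onnx"))

    # Compile for the CPU backend; calling `computation` with input tensors
    # would then execute the imported graph.
    runtime = ng.runtime(backend_name="CPU")
    computation = runtime.computation(ng_function)
    status = "imported"
except Exception:
    # The packages (or model.onnx) are not available in this environment.
    status = "unavailable"
```

On a machine where the packages are installed, `computation(input_data)` would run inference; the [how to import] guide linked in this README is the authoritative reference for the exact steps.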
@@ -72,7 +71,7 @@ stack, and early adopters will be able to test them in 2019.
 [Upcoming DL accelerators]: https://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/vision-accelerator-design-product-brief.pdf
 [import it]: http://ngraph.nervanasys.com/docs/latest/howto/import.html
 [ONNXIFI]: https://github.com/onnx/onnx/blob/master/docs/ONNXIFI.md
-[ONNX Runtime]: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx
+[ONNX Runtime]: https://azure.microsoft.com/en-us/blog/onnx-runtime-is-now-open-source/
 [WinML]: http://docs.microsoft.com/en-us/windows/ai
 [How to]: https://ngraph.nervanasys.com/docs/latest/howto/index.html
 [Framework integration guide]: https://ngraph.nervanasys.com/docs/latest/frameworks/index.html