Unverified Commit c5d52f14 authored by Scott Cyphers, committed by GitHub

Migrate doc changes to r0.18 (#2738)

* Migrate doc changes

* Add TensorFlow version change
parent 55e1e17c
@@ -14,7 +14,7 @@ workloads on CPU for inference, please refer to the links below.
 | Framework (Version)        | Installation guide                     | Notes
 |----------------------------|----------------------------------------|-----------------------------------
-| TensorFlow* 1.12           | [Pip install](https://github.com/NervanaSystems/ngraph-tf#option-1-use-a-pre-built-ngraph-tensorflow-bridge) or [Build from source](https://github.com/NervanaSystems/ngraph-tf#option-2-build-ngraph-bridge-from-source) | 20 [Validated workloads]
+| TensorFlow* 1.13.1         | [Pip install](https://github.com/NervanaSystems/ngraph-tf#option-1-use-a-pre-built-ngraph-tensorflow-bridge) or [Build from source](https://github.com/NervanaSystems/ngraph-tf#option-2-build-ngraph-bridge-from-source) | 20 [Validated workloads]
 | MXNet* 1.3                 | [Pip install](https://github.com/NervanaSystems/ngraph-mxnet#Installation) or [Build from source](https://github.com/NervanaSystems/ngraph-mxnet#building-with-ngraph-support) | 18 [Validated workloads]
 | ONNX 1.4                   | [Pip install](https://github.com/NervanaSystems/ngraph-onnx#installation) | 17 [Validated workloads]
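For the pip-install route in the table, the flow looks roughly like the following. The PyPI package name `ngraph-tensorflow-bridge` and the `ngraph_bridge` module name are taken from the linked ngraph-tf guide and should be treated as assumptions to verify there:

```shell
# Prebuilt nGraph bridge for TensorFlow* 1.13.1
# (package name per the linked ngraph-tf guide)
pip install ngraph-tensorflow-bridge

# Confirm the bridge module is importable alongside TensorFlow
python -c "import ngraph_bridge; print(ngraph_bridge.__version__)"
```

The MXNet and ONNX rows follow the same pattern; see their linked installation guides for the exact package names.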
...
@@ -23,7 +23,7 @@ packages and prerequisites:
    :widths: 25, 15, 25, 20, 25
    :escape: ~

-   CentOS 7.4 64-bit, GCC 4.8, CMake 3.4.3, supported, ``wget zlib-devel ncurses-libs ncurses-devel patch diffutils gcc-c++ make git perl-Data-Dumper``
+   CentOS 7.4 64-bit, GCC 4.8, CMake 3.5.0, supported, ``wget zlib-devel ncurses-libs ncurses-devel patch diffutils gcc-c++ make git perl-Data-Dumper``
    Ubuntu 16.04 or 18.04 (LTS) 64-bit, Clang 3.9, CMake 3.5.1 + GNU Make, supported, ``build-essential cmake clang-3.9 clang-format-3.9 git curl zlib1g zlib1g-dev libtinfo-dev unzip autoconf automake libtool``
    Clear Linux\* OS for Intel Architecture, Clang 5.0.1, CMake 3.10.2, experimental, bundles ``machine-learning-basic dev-utils python3-basic python-basic-dev``
@@ -185,13 +185,13 @@ The process documented here will work on CentOS 7.4.
    .. code-block:: console

-      $ wget https://cmake.org/files/v3.4/cmake-3.4.3.tar.gz
-      $ tar -xzvf cmake-3.4.3.tar.gz
-      $ cd cmake-3.4.3
+      $ wget https://cmake.org/files/v3.5/cmake-3.5.0.tar.gz
+      $ tar -xzvf cmake-3.5.0.tar.gz
+      $ cd cmake-3.5.0
       $ ./bootstrap --system-curl --prefix=~/cmake
       $ make && make install
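After `make install`, the newly built CMake lives under `~/cmake` rather than on the default `PATH`. A minimal sketch of making it visible and confirming the version, assuming the `--prefix=~/cmake` used above:

```shell
# Put the locally built CMake ahead of any system copy
export PATH="$HOME/cmake/bin:$PATH"

# Confirm the shell now resolves the new binary
cmake --version
```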
-#. Clone the `NervanaSystems` ``ngraph`` repo via HTTPS and use CMake 3.4.3 to
+#. Clone the `NervanaSystems` ``ngraph`` repo via HTTPS and use CMake 3.5.0 to
    build nGraph Libraries to ``~/ngraph_dist``. This command enables ONNX
    support in the library (optional).
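The clone-and-build step above can be sketched as follows. The `NGRAPH_ONNX_IMPORT_ENABLE` flag is taken to be the switch the text calls "enables ONNX support"; treat the exact flag name as an assumption to check against the ngraph README for this release:

```shell
# Clone the repo over HTTPS and configure an out-of-tree build
git clone https://github.com/NervanaSystems/ngraph.git
cd ngraph
mkdir build && cd build

# Install to ~/ngraph_dist; ONNX import support is optional
# (flag name assumed from the nGraph build docs)
cmake .. -DCMAKE_INSTALL_PREFIX=~/ngraph_dist -DNGRAPH_ONNX_IMPORT_ENABLE=ON
make -j"$(nproc)" && make install
```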
...
@@ -73,11 +73,11 @@ author = 'Intel Corporation'
 # built documents.
 #
 # The short X.Y version.
-version = '0.17'
+version = '0.18'
 # The Documentation full version, including alpha/beta/rc tags. Some features
 # available in the latest code will not necessarily be documented first
-release = '0.17.0'
+release = '0.18.0'
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
...
@@ -18,7 +18,7 @@ cloned from one of our GitHub repos and built to connect to nGraph device
 backends while maintaining the framework's programmatic or user interface. Bridges
 currently exist for the TensorFlow\* and MXNet\* frameworks.

-ONNX is not a framework; however, it can be used with nGraph's :doc:../python_api/index`
+ONNX is not a framework; however, it can be used with nGraph's :doc:`../python_api/index`
 to import and execute ONNX models.

 .. figure:: ../graphics/whole-stack.png
@@ -49,7 +49,7 @@ like TensorFlow and PyTorch.
    :width: 725px
    :alt: Translation flow to nGraph function graph

 .. _tune the workload to extract best performance: https://ai.intel.com/accelerating-deep-learning-training-inference-system-level-optimizations
 .. _a few small: https://software.intel.com/en-us/articles/boosting-deep-learning-training-inference-performance-on-xeon-and-xeon-phi
-.. quantize.rst:
+.. ops/quantize.rst:

 ########
 Quantize
...