Commit 790ca60f authored by L.S. Cook, committed by Scott Cyphers

Prerelease docreview (#652)

* add some docs for onnx

* Update install documentation

- Clarify the make -j use case from Bob's PR
- Fix the words to be less wordy in some places - unit test does
  not need its own section
- Add placeholder for onnx integration discussion w Michal
- See if doxyfile adjustment can build docs faster

* Update install documentation

- Clarify the make -j use case from Bob's PR with Adam's suggestion
- Fix the words to be less wordy in some places - unit test does
  not need its own section
- Add placeholder for onnx integration discussion w Michal
- Fix Framework Integration Guide top links to be bullets
- See if doxyfile adjustment can build docs faster
- Fix grammar to the rendered title of embedded link

* Update framework-integration-guides.rst

* Update install.rst

* Update install.rst

* WIP review docs for onnx tutorial

- Fixed install instructions to clarify use of make -j as per discussion
- Added tutorial on how to run inference for CIFAR10 ResNet
- Condensed some instructions to require fewer clicks when reading on the web

* Friendly API for Ngraph++ Python bindings

* Preview doc build for review

* Preview doc build for review small edit

* More collab editing changes

* Make sure intro gets updated in both places

* Really add the changes this time

* PR review comments added

* Match changes made on 79c77fdd to resolve merge conflict
parent aa3815c5
# Intel® nGraph™ library
Welcome to Intel nGraph, an open source C++ library for developers of Deep
Learning (DL) systems. Here you will find a suite of components, APIs, and
documentation that can be used to compile and run Deep Neural Network (DNN)
models defined in a variety of frameworks.
Welcome to Intel® nGraph™, an open source C++ library and compiler. This
project enables modern compute platforms to run and train Deep Neural Network
(DNN) models. It is framework-neutral and supports a variety of backends
used by Deep Learning (DL) frameworks.
The nGraph library translates a framework’s representation of computations into
an Intermediate Representation (IR) designed to promote computational efficiency
@@ -17,5 +17,5 @@ See our [install] docs for how to get started.
For this early release, we provide [framework integration guides] to compile
MXNet and TensorFlow-based projects.
[install]: http://ngraph.nervanasys.com/docs/cpp/installation.html
[framework integration guides]:http://ngraph.nervanasys.com/docs/cpp/framework-integration-guides.html
[install]: http://ngraph.nervanasys.com/index.html/install.html
[framework integration guides]: http://ngraph.nervanasys.com/index.html/framework-integration-guides.html
# ******************************************************************************
# Copyright 2018 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ******************************************************************************
import onnx
onnx_protobuf = onnx.load('/path/to/model/cntk_ResNet20_CIFAR10/model.onnx')
# Convert ONNX model to an ngraph model
from ngraph_onnx.onnx_importer.importer import import_onnx_model
ng_model = import_onnx_model(onnx_protobuf)[0]
# Using an ngraph runtime (CPU backend), create a callable computation
import ngraph_api as ng
runtime = ng.runtime(manager_name='CPU')
resnet = runtime.computation(ng_model['output'], *ng_model['inputs'])
# Load or create an image
import numpy as np
picture = np.ones([1, 3, 32, 32])
# Run ResNet inference on picture
resnet(picture)
@@ -2109,7 +2109,7 @@ DOT_GRAPH_MAX_NODES = 512
# Minimum value: 0, maximum value: 1000, default value: 0.
# This tag requires that the tag HAVE_DOT is set to YES.
MAX_DOT_GRAPH_DEPTH = 0
MAX_DOT_GRAPH_DEPTH = 5
# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
# background. This is disabled by default, because dot on Windows does not seem
@@ -62,9 +62,9 @@ author = 'Intel Corporation'
# built documents.
#
# The short X.Y version.
version = 'alpha'
version = '0.1.0'
# The full version, including alpha/beta/rc tags.
release = 'alpha'
release = 'v0.1.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -4,15 +4,15 @@
Framework Integration Guides
#############################
.. contents::
* :ref:`mxnet_intg`
* :ref:`tensorflow_intg`
.. _mxnet_intg:
Compile MXNet\* with ``libngraph``
==================================
.. important:: These instructions pick up from where the :doc:`installation`
.. important:: These instructions pick up from where the :doc:`install`
installation instructions left off, so they presume that your system already
has the library installed at ``$HOME/ngraph_dist`` as the default location.
If the |nGl| code has not yet been installed to your system, please go back
@@ -97,7 +97,7 @@ Compile MXNet\* with ``libngraph``
Build TensorFlow\* with an XLA plugin to ``libngraph``
======================================================
.. important:: These instructions pick up where the :doc:`installation`
.. important:: These instructions pick up where the :doc:`install`
installation instructions left off, so they presume that your system already
has the |nGl| installed. If the |nGl| code has not yet been installed to
your system, please go back to complete those steps, and return here when
@@ -95,5 +95,10 @@ Glossary
model description
A description of a program's fundamental operations that are
used by a framework to generate inputs for computation.
used by a framework to generate inputs for computation.
export
The serialized version of a trained model that can be passed to
one of the nGraph backends for computation.
.. import.rst:
###############
Import a model
###############
:ref:`from_onnx`
.. TODO Make sure that this is the first page data scientists find when they
.. are simply trying to run a trained model; they DO NOT need to do a system
.. install of the Intel nGraph++ bridges; they can use our Python APIs to run
.. a trained model.
Intel nGraph APIs can be used to import a model that has been *exported* from
a Deep Learning framework. The export produces a file with a serialized model
that can be loaded and passed to one of the nGraph backends for execution.
.. _from_onnx:
Importing models from ONNX
===========================
The most widely supported :term:`export` format available today is `ONNX`_.
Models that have been serialized to ONNX are easy to identify; they are
usually named ``<some_model>.onnx`` or ``<some_model>.onnx.pb``. These
`tutorials from ONNX`_ describe how to turn trained models into an
``.onnx`` export.
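As a quick illustration of what such an export looks like, here is a minimal
sketch for serializing a PyTorch model to ONNX. This is not part of the nGraph
tooling; it assumes PyTorch and torchvision are installed, and any trained
model would work the same way:

.. code-block:: python

   import torch
   import torchvision

   # A pretrained model, used here purely for illustration; any trained
   # torch.nn.Module can be exported the same way.
   model = torchvision.models.resnet18(pretrained=True)
   model.eval()

   # ONNX export traces the graph using a dummy input with the shape
   # the model expects.
   dummy_input = torch.randn(1, 3, 224, 224)
   torch.onnx.export(model, dummy_input, 'resnet18.onnx')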
.. important:: If you landed on this page and you already have an ``.onnx``
or ``.onnx.pb`` formatted file, you should be able to run the inference
without needing to dig into anything from the "Frameworks" sections. You
will, however, need to have completed the steps described in
our :doc:`../install` guide.
To demonstrate this functionality, we'll use an `already serialized CIFAR10`_
model based on ResNet20. Remember that this model *has already been trained* to
an accuracy its developer deemed sufficient, and then exported from a framework
such as Caffe2, PyTorch, or CNTK. We are simply going to build an nGraph
representation of the model, execute it, and produce some outputs.
Installing ``ngraph_onnx``
--------------------------
In order to use ONNX models, you will also need the companion tool ``ngraph_onnx``.
``ngraph_onnx`` requires Python 3.5 or higher.
#. First, set the environment variables to the location where the nGraph++
   libraries were built. This code assumes that you followed the default
   instructions from the :doc:`../install` guide and that your version of
   ``ngraph_dist`` can be found at ``$HOME/ngraph_dist``:
.. code-block:: bash
export NGRAPH_CPP_BUILD_PATH=$HOME/ngraph_dist
export LD_LIBRARY_PATH=$HOME/ngraph_dist/lib
export DYLD_LIBRARY_PATH=$HOME/ngraph_dist/lib # On MacOS
#. Now add *Protocol Buffers* and Python3 PIP dependencies to your system. ONNX
requires Protocol Buffers version 2.6.1 or higher. For example, on Ubuntu:
.. code-block:: console
$ sudo apt install protobuf-compiler libprotobuf-dev python3-pip
#. Check out the branch named `python_binding`:
.. code-block:: console
$ cd /opt/libraries/ngraph
$ git checkout python_binding
Switched to branch 'python_binding'
Your branch is up-to-date with 'origin/python_binding'.
#. Recursively update the submodules and install the Python dependencies:
.. code-block:: console
$ git submodule update --init --recursive
$ cd python
$ pip3 install -r requirements.txt
$ pip3 install .
#. Finally, clone the ``ngraph-onnx`` repo and use :command:`pip` to
install the Python dependencies for this tool; if you set up your
original nGraph library installation under a ``libraries`` directory
as recommended, it's a good idea to clone this repo there, too.
.. code-block:: console
$ cd /opt/libraries
$ git clone git@github.com:NervanaSystems/ngraph-onnx
$ cd ngraph-onnx
$ pip3 install -r requirements.txt
$ pip3 install .
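Before moving on, a quick sanity check can confirm that both packages import
cleanly. This step is a suggestion rather than part of the official
instructions; the module names match those used in the example code below:

.. code-block:: python

   # Run with python3. If either import fails, revisit the steps above.
   import ngraph_api as ng
   from ngraph_onnx.onnx_importer.importer import import_onnx_model

   print('ngraph_api and ngraph_onnx imported successfully')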
Importing a serialized model
-----------------------------
.. Now we can import any model that has been serialized by ONNX,
run Python code locally to interact with that model, create and
load objects, and run inference.
These instructions demonstrate how to run inference on the
`already serialized CIFAR10`_ ResNet20 model described above.
Import ONNX and load an ONNX file from disk
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../../examples/onnx_example.py
:language: python
:lines: 17-19
Convert an ONNX model to an ngraph model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../../examples/onnx_example.py
:language: python
:lines: 22-23
Create a callable computation object
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../../examples/onnx_example.py
:language: python
:lines: 27-29
Load or create an image
~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../../examples/onnx_example.py
:language: python
:lines: 32-33
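The example uses ``np.ones`` as a stand-in picture. To run inference on a real
image, you would load it and reshape it to the ``(1, 3, 32, 32)`` NCHW layout
the model expects; here is a sketch that assumes the Pillow library is
available:

.. code-block:: python

   import numpy as np
   from PIL import Image  # Pillow; an assumed dependency for this sketch

   # Load the image, resize it to CIFAR10 dimensions, and reorder it from
   # HWC (height, width, channel) to NCHW with a leading batch axis.
   image = Image.open('/path/to/picture.png').convert('RGB').resize((32, 32))
   picture = np.asarray(image, dtype=np.float32).transpose(2, 0, 1)[np.newaxis]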
Run ResNet inference on picture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../../examples/onnx_example.py
:language: python
:lines: 36
Put it all together
===================
.. literalinclude:: ../../../examples/onnx_example.py
:language: python
:lines: 17-37
:caption: "Code to run inference on a CIFAR10 trained model"
If you tested the ``.onnx`` file used in the example above, the outputs
should look something like:
.. code-block:: python
Attempting to write a float64 value to a <Type: 'float32'> tensor. Will attempt type conversion.
array([[ 1.3120822 , -1.6729498 , 4.2079573 , 1.4012246 , -3.5463796 ,
2.343378 , 1.7799224 , -1.6155218 , 0.07770489, -4.2944083 ]],
dtype=float32)
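The result is a ``(1, 10)`` array of raw class scores. To turn it into a
human-readable prediction, take the ``argmax`` and look it up in the CIFAR10
label list. The label ordering below is the conventional one and is an
assumption here; verify it against the ordering used when the model was
trained:

.. code-block:: python

   import numpy as np

   # Conventional CIFAR10 label order (an assumption; confirm for your model).
   cifar10_labels = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                     'dog', 'frog', 'horse', 'ship', 'truck']

   scores = resnet(picture)            # shape: (1, 10)
   predicted = int(np.argmax(scores))
   print('Predicted class:', cifar10_labels[predicted])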
.. Importing models from NNVM
---------------------------
.. if you work on NNVM you can add this instruction here.
.. Importing models from XLA
--------------------------
.. if you work on XLA you can add this instruction here.
.. etc, eof
.. _ONNX: http://onnx.ai
.. _tutorials from ONNX: https://github.com/onnx/tutorials
.. _already serialized CIFAR10: https://github.com/NervanaSystems/ngraph-onnx-val/tree/master/models
@@ -8,6 +8,7 @@ How to
:caption: How to
execute.rst
import.rst
The "How to" articles in this section explain how to do specific tasks with the
@@ -16,8 +17,8 @@ if an entity (framework or user) wishes to make use of target-based computationa
resources, it can either:
* Do the tasks programmatically through the framework, or
* Provide a clear model definition with documentation for the computational
resources needed.
* Provide a serialized model that can be imported to run on one of the nGraph
backends.
.. note:: This section is aimed at intermediate-level developers working with
the nGraph++ library. It assumes a developer has an understanding of the concepts
@@ -29,22 +30,20 @@ learning systems, we go beyond the use of deep learning primitives, and include
APIs and documentation for developers who want the ability to write programs
that use custom backends. For example, we know that GPU resources can be useful
backends for *some* kinds of algorithmic operations while they impose inherent
limitations and slow down others. We are barely scraping the surface of what is
possible for a hybridized approach to many kinds of training and inference-based
computational tasks.
limitations and slow down others.
One of our goals with the nGraph project is to enable developers with tools to
build programs that quickly access and process data with or from a breadth of
edge and network devices. Furthermore, we want them to be able to make use of
the best kind of computational resources for the kind of data they are processing,
after it has been gathered.
One of our goals with the nGraph++ library is to enable developers with tools to
build programs that quickly access and process data from a breadth of edge and
network devices. Furthermore, we want them to be able to make use of the best
kind of computational resources for the kind of data they are processing, after
it has been gathered.
To get started, we've provided a basic example for how to execute a computation
that can run on an nGraph backend; this is analogous to a framework bridge.
To get started, we've provided a basic example for how to :doc:`execute` a
computation with an nGraph backend; this is analogous to a framework bridge.
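To give a flavor of what such a computation looks like through the Python
bindings, here is a minimal sketch. The names follow the friendly
``ngraph_api`` interface used elsewhere in these docs, but treat this as
illustrative rather than canonical; the API may differ in your version:

.. code-block:: python

   import numpy as np
   import ngraph_api as ng

   # Two parameter nodes and a graph that adds them (shapes assumed).
   a = ng.parameter(shape=[2, 2], name='A')
   b = ng.parameter(shape=[2, 2], name='B')
   model = a + b

   # Compile for the CPU backend and execute with concrete values.
   runtime = ng.runtime(manager_name='CPU')
   add = runtime.computation(model, a, b)
   print(add(np.full([2, 2], 2.0), np.full([2, 2], 3.0)))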
This section is under development; it will eventually be populated with more
articles geared toward data scientists, algorithm designers, framework developers,
backend engineers, and others. We welcome contributions from the community and
invite you to experiment with the variety of hybridization and performance
extractions available through the nGraph library.
backend engineers, and others. We welcome ideas and contributions from the
community.
@@ -13,19 +13,17 @@
.. limitations under the License.
.. ---------------------------------------------------------------------------
########################
######################
Intel nGraph++ library
########################
######################
Welcome to Intel® nGraph™, an open source C++ library for developers of
:abbr:`Deep Learning (DL)` (DL) systems. Here you will find a suite of
components, APIs, and documentation that can be used to compile and run
:abbr:`Deep Neural Network (DNN)` (DNN) models defined in a variety of
frameworks.
Welcome to Intel® nGraph™, an open source C++ library and compiler. This
project enables modern compute platforms to run and train :abbr:`Deep Neural Network (DNN)`
models. It is framework-neutral and supports a variety of backends used by
:abbr:`Deep Learning (DL)` frameworks.
.. figure:: graphics/ngraph-hub.png
For this early release, we've provided :doc:`framework-integration-guides` to
compile and run MXNet\* and TensorFlow\*-based projects.
@@ -53,8 +51,7 @@ Sections
:name: tocmaster
:caption: Table of Contents
installation.rst
testing-libngraph.rst
install.rst
framework-integration-guides.rst
graph-basics.rst
howto/index.rst
.. installation:
.. install.rst:
########
Install
@@ -7,8 +7,8 @@ Install
Build Environments
==================
The |release| version of |project| supports Linux\*-based systems which have
recent updates of the following packages and prerequisites:
The |release| version of |project| supports Linux\*-based systems
with the following packages and prerequisites:
.. csv-table::
:header: "Operating System", "Compiler", "Build System", "Status", "Additional Packages"
@@ -16,98 +16,88 @@ recent updates of the following packages and prerequisites:
:escape: ~
CentOS 7.4 64-bit, GCC 4.8, CMake 3.2, supported, ``patch diffutils zlib1g-dev libtinfo-dev``
Ubuntu 16.04 (LTS) 64-bit, Clang 3.9, CMake 3.5.1 + GNU Make, supported, ``build-essential cmake clang-3.9 git libtinfo-dev``
Ubuntu 16.04 (LTS) 64-bit, Clang 3.9, CMake 3.5.1 + GNU Make, supported, ``build-essential cmake clang-3.9 git zlib1g libtinfo-dev``
Clear Linux\* OS for Intel Architecture, Clang 5.0.1, CMake 3.10.2, experimental, bundles ``machine-learning-basic dev-utils python3-basic python-basic-dev``
Other configurations may work, but aren't tested; on Ubuntu 16.04 with
``gcc-5.4.0`` or ``clang-3.9``, for example, we recommend adding
``-DNGRAPH_USE_PREBUILT_LLVM=TRUE`` to the :command:`cmake` command in step 4
below. This gets a pre-built tarball of LLVM+Clang from `llvm.org`_, and will
substantially reduce build time.
Other configurations may work, but should be considered experimental with
limited support. On Ubuntu 16.04 with ``gcc-5.4.0`` or ``clang-3.9``, for
example, we recommend adding ``-DNGRAPH_USE_PREBUILT_LLVM=TRUE`` to the
:command:`cmake` command in step 4 below. This fetches a pre-built tarball
of LLVM+Clang from `llvm.org`_, and will substantially reduce build time.
If using ``gcc-4.8``, it may be necessary to add symlinks from ``gcc`` to
``gcc-4.8``, and from ``g++`` to ``g++-4.8``, in your :envvar:`PATH`, even
If using ``gcc`` version 4.8, it may be necessary to add symlinks from ``gcc``
to ``gcc-4.8``, and from ``g++`` to ``g++-4.8``, in your :envvar:`PATH`, even
if you explicitly specify the ``CMAKE_C_COMPILER`` and ``CMAKE_CXX_COMPILER``
flags when building. (You **should NOT** supply the ``-DNGRAPH_USE_PREBUILT_LLVM``
flags when building. (**Do NOT** supply the ``-DNGRAPH_USE_PREBUILT_LLVM``
flag in this case, because the prebuilt tarball supplied on llvm.org is not
compatible with a gcc-4.8 based build.)
compatible with a gcc 4.8-based build.)
Support for macOS is limited; see the `macOS development`_ section at the end of
this page for details.
Support for macOS is limited; see the `macOS development`_ section at the end
of this page for details.
Installation Steps
==================
To build |nGl| on one of the supported systems, the CMake procedure will
install ``ngraph_dist`` to the installing user's ``$HOME`` directory as
the default location. See the :file:`CMakeLists.txt` file for more
information about how to change or customize this location.
The CMake procedure installs ``ngraph_dist`` to the installing user's ``$HOME``
directory as the default location. See the :file:`CMakeLists.txt` file for
details about how to change or customize the install location.
#. (Optional) Create something like ``/opt/local`` and (with sudo permissions),
give ownership of that directory to your user. Under this directory, you can
add a placeholder for ``libraries`` to have a placeholder for the documented
source cloned from the repo:
#. (Optional) Create something like ``/opt/libraries`` and (with sudo),
give ownership of that directory to your user. Creating such a placeholder
can be useful if you'd like to have a local reference for APIs and
documentation, or if you are a developer who wants to experiment with
how to :doc:`../howto/execute` using resources available through the
code base.
.. code-block:: console
.. code-block:: console
$ cd /opt
$ sudo mkdir -p local/libraries
$ sudo chown -R username:username /opt/local
$ sudo mkdir -p /opt/libraries
$ sudo chown -R username:username /opt/libraries
$ cd /opt/libraries
#. Clone the `NervanaSystems` ``ngraph-cpp`` repo to your `/libraries`
directory.
#. Clone the `NervanaSystems` ``ngraph`` repo:
.. code-block:: console
$ cd /opt/local/libraries
$ git clone git@github.com:NervanaSystems/ngraph-cpp.git
$ cd ngraph-cpp
$ git clone git@github.com:NervanaSystems/ngraph.git
$ cd ngraph
#. Create a build directory outside of the ``ngraph-cpp/src`` directory
tree; somewhere like ``ngraph-cpp/build``, for example.
#. Create a build directory outside of the ``ngraph/src`` directory
tree; somewhere like ``ngraph/build``, for example:
.. code-block:: console
$ mkdir build
$ mkdir build && cd build
#. ``$ cd`` to the build directory and generate the GNUMakefiles in the
customary manner from within your ``build`` directory (remember to append the
command with the prebuilt option, if needed):
#. Generate the GNUMakefiles in the customary manner (from within the
``build`` directory). If running ``gcc-5.4.0`` or ``clang-3.9``, remember
that you can also append ``cmake`` with the prebuilt LLVM option to
speed up the build:
.. code-block:: console
$ cd build && cmake ../ [-DNGRAPH_USE_PREBUILT_LLVM=TRUE]
$ cmake ../ [-DNGRAPH_USE_PREBUILT_LLVM=TRUE]
#. (Optional) Run ``$ make [-jN]`` where ``-jN`` specifies the number of physical
cores to use to build. The example here uses a configuration of ``j8``,
which is good for a system install using an 8-core Intel® Xeon® CPU processor.
This step is **not recommended** for machines with too little RAM available,
such as those whose RAM is superceded by Docker or VM tasks.
#. Run ``$ make`` and ``make install`` to install ``libngraph.so`` and the
header files to ``$HOME/ngraph_dist``:
.. code-block:: console
$ make -j8
#. Run ``make install`` to install ``libngraph.so`` and the header files to the
default location of ``$HOME/ngraph_dist``
$ make # note: make -j <N> may work, but sometimes results in out-of-memory errors if too many compilation processes are used
.. code-block:: console
$ make install
#. (Optional, requires `doxygen`_, `Sphinx`_, and `breathe`_). Run ``make html``
inside the ``doc/sphinx`` directory of the cloned source to build a copy of
the `website docs`_ locally. The low-level API docs with inheritance diagrams
and collaboration diagrams can be found inside the ``/docs/doxygen/``
directory.
the `website docs`_ locally. The low-level API docs with inheritance and
collaboration diagrams can be found inside the ``/docs/doxygen/`` directory.
.. macos_development:
macOS development
-----------------
.. note:: The macOS*\ platform is officially unsupported.
.. note:: The macOS*\ platform is not officially supported.
The repository includes two scripts (``maint/check-code-format.sh`` and
``maint/apply-code-format.sh``) that are used respectively to check adherence
@@ -123,9 +113,57 @@ according to those conventions. These scripts require the command
$ ln -s /usr/local/opt/llvm@3.9/bin/clang-format $HOME/bin/clang-format-3.9
$ echo 'export PATH=$HOME/bin:$PATH' >> $HOME/.bash_profile
Test
====
The |InG| library code base uses Google's\* `googletest framework`_
for unit tests. The ``cmake`` command from the :doc:`install` guide
automatically downloaded a copy of the needed ``gtest`` files when
it configured the build directory.
To perform unit tests on the install:
#. Create and configure the build directory as described in our
:doc:`install` guide.
#. Enter the build directory and run ``make check``:
.. code-block:: console
$ cd build/
$ make check
Compile a framework with ``libngraph``
======================================
After building and installing nGraph++ on your system, there are two likely
paths for what you'll want to do next: either compile a framework to run a DL
training model, or import an "already-trained" model for inference on an
Intel nGraph++ enabled backend.
For the former case, the :doc:`framework-integration-guides` provided with this
early |release| release can help you get started with training a model on a
supported framework:
* :doc:`MXNet<framework-integration-guides>` framework,
* :doc:`TensorFlow<framework-integration-guides>` framework, and
* neon™ `frontend framework`_.
For the latter case, if you've followed a tutorial from `ONNX`_, and you have an
exported, serialized model, you can skip the section on frameworks and go directly
to our :doc:`../howto/import` documentation.
Please keep in mind that both of these are under continuous development, and will
be updated frequently in the coming months. Stay tuned!
.. _doxygen: https://www.stack.nl/~dimitri/doxygen/
.. _Sphinx: http://www.sphinx-doc.org/en/stable/
.. _breathe: https://breathe.readthedocs.io/en/latest/
.. _llvm.org: https://www.llvm.org
.. _NervanaSystems: https://github.com/NervanaSystems/ngraph-cpp/blob/master/README.md
.. _NervanaSystems: https://github.com/NervanaSystems/ngraph/blob/master/README.md
.. _website docs: http://ngraph.nervanasys.com/index.html/index.html
.. _googletest framework: https://github.com/google/googletest.git
.. _ONNX: http://onnx.ai
.. _frontend framework: http://neon.nervanasys.com/index.html/
@@ -3,10 +3,10 @@
About
=====
Welcome to the Intel nGraph project, an open source C++ library for developers
of :abbr:`Deep Learning (DL)` (DL) systems. Here you will find a suite of
components, APIs, and documentation that can be used to compile and run
:abbr:`Deep Neural Network (DNN)` models defined in a variety of frameworks.
Welcome to Intel® nGraph™, an open source C++ library and compiler. This
project enables modern compute platforms to run and train
:abbr:`Deep Neural Network (DNN)` models. It is framework-neutral and supports
a variety of backends used by :abbr:`Deep Learning (DL)` frameworks.
.. figure:: ../graphics/ngraph-hub.png
.. testing-libngraph:
########################
Test the nGraph library
########################
The |InG| library code base uses the `GTest framework`_ for unit tests. CMake
automatically downloads a copy of the required GTest files when configuring the
build directory.
To perform the unit tests:
#. Create and configure the build directory as described in our
:doc:`installation` guide.
#. Enter the build directory and run ``make check``:
.. code-block:: console
$ cd build/
$ make check
Compiling a framework with ``libngraph``
========================================
After building and installing the nGraph library to your system, the next
logical step is to compile a framework that you can use to run a
training/inference model with one of the backends that are now enabled.
For this early |release| release, we're providing :doc:`framework-integration-guides`,
for:
* :doc:`MXNet<framework-integration-guides>` framework,
* :doc:`TensorFlow<framework-integration-guides>` framework, and
* neon™ `frontend framework`_.
Integration guides for other frameworks are tentatively forthcoming.
.. _GTest framework: https://github.com/google/googletest.git
.. _frontend framework: http://neon.nervanasys.com/index.html/