[Spec] Implement support for axes input of LRN op in reference implementation (#3454) · b18cb73d
    Authored by Mateusz Bencer
    * Axes input was added to LRN
    
    * Unit tests for axes shape check were added
    
    * LRN node deserialization was updated
    
    * Fixed EOF and clang style applied
    
    * Changed Constant to Parameter type in unit tests
    
    * Expanded LRN reference interface
    
    * Fixed LRN assert description
    
    * Fixed passing arguments
    
    * Reference implementation for one axis
    
    * Implementation for channel
    
    * Implementation for hw
    
    * Working on recurrence version
    
    * Implemented recurrence version for hw
    
    * Reference implementation code refactor
    
    * Fixed ref LRN implementation and added tests
    
    * Added 6D unit test
    
    * Clang styles applied
    
    * Code review remarks introduced
    
    * Support for dynamic shape of axes input
    
    * Clang styles applied
    
    * Code review remarks introduced
    
    * Added checking if axes values are in correct range
    
    * Clang styles applied
    
    * Removed redundant include
    
    * Code review remarks introduced
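
Taken together, the commit messages above describe an LRN reference implementation that normalizes each element over a window along a caller-specified set of axes, and that validates the axes values are in range. The following is only an illustrative NumPy sketch of that idea, not nGraph's actual C++ reference code; the function name, default attribute values, and the multi-axis windowing scheme are assumptions:

```python
import numpy as np

def lrn_reference(x, axes, alpha=1e-4, beta=0.75, bias=2.0, size=3):
    """Sketch of LRN normalizing over a centered window of `size`
    along each axis listed in `axes` (illustrative, not nGraph's code)."""
    # Check that axes values are in the correct range,
    # as one of the commits above does.
    if any(a < 0 or a >= x.ndim for a in axes):
        raise ValueError("axes values must be in range [0, rank)")

    half = size // 2
    sq = x.astype(np.float64) ** 2
    out = np.empty_like(sq)
    for idx in np.ndindex(x.shape):
        # Build the slice window: a span of `size` along reduced axes,
        # a single element along all other axes.
        window = []
        for d in range(x.ndim):
            if d in axes:
                lo = max(idx[d] - half, 0)
                hi = min(idx[d] + half + 1, x.shape[d])
                window.append(slice(lo, hi))
            else:
                window.append(slice(idx[d], idx[d] + 1))
        sqsum = sq[tuple(window)].sum()
        out[idx] = x[idx] / (bias + (alpha / size) * sqsum) ** beta
    return out
```

With axes={1} this reduces to the classic across-channel LRN; passing several axes (e.g. {2, 3}) gives the "hw" variant the commits mention.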

nGraph Compiler stack

Quick start

To begin using nGraph with popular frameworks, please refer to the links below.

Framework (Version)   Installation guide                Notes
TensorFlow*           Pip install or build from source  20 validated workloads
ONNX 1.5              Pip install                       17 validated workloads

Python wheels for nGraph

The Python wheels for nGraph have been tested and are supported on the following 64-bit systems:

  • Ubuntu 16.04 or later
  • CentOS 7.6
  • Debian 10
  • macOS 10.14.3 (Mojave)

Frameworks using the nGraph Compiler stack to execute workloads have shown a performance boost of up to 45X compared to native framework implementations. We have also seen performance boosts on workloads that are not included in the list of validated workloads, thanks to nGraph's powerful subgraph pattern matching.

Additionally, we have integrated nGraph with PlaidML to provide deep learning performance acceleration on Intel, NVIDIA, and AMD GPUs. More details on the current architecture of the nGraph Compiler stack can be found in Architecture and features, and recent changes to the stack are explained in the Release Notes.

What is nGraph Compiler?

nGraph Compiler aims to accelerate the development of AI workloads using any deep learning framework and their deployment to a variety of hardware targets. We strongly believe in providing freedom, performance, and ease of use to AI developers.

The diagram below shows deep learning frameworks and hardware targets supported by nGraph. NNP-L and NNP-I in the diagram refer to Intel's next-generation deep learning accelerators: the Intel® Nervana™ Neural Network Processors for Learning and Inference, respectively. Future plans for supporting additional deep learning frameworks and backends are outlined in the ecosystem section.

While the ecosystem shown above is all functioning, for the Beta release of nGraph we have validated performance for deep learning inference on CPU processors such as Intel® Xeon®. The Gold release is targeted for June 2019; it will feature broader workload coverage, including quantized graphs (int8), and will implement support for dynamic shapes.

Our documentation has extensive information about how to use the nGraph Compiler stack to create an nGraph computational graph, integrate custom frameworks, and interact with supported backends. If you wish to contribute to the project, please don't hesitate to ask questions in GitHub issues after reviewing our contribution guide below.

How to contribute

We welcome community contributions to nGraph. If you have an idea for how to improve it:

  • See the contrib guide for code formatting and style guidelines.
  • Share your proposal via GitHub issues.
  • Ensure you can build the product and run all the examples with your patch.
  • In the case of a larger feature, create a test.
  • Submit a pull request.
  • Make sure your PR passes all CI tests. Note: You can test locally with make check.
  • We will review your contribution and, if any additional fixes or modifications are necessary, we may provide feedback to guide you. Once accepted, your pull request will be merged into the repository.
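
The checklist above can be summarized as a shell workflow. Apart from `make check`, which the text mentions explicitly, the exact commands below are assumptions based on the repository's CMake setup and the usual fork-and-pull-request flow; substitute your own fork URL:

```shell
# Clone your fork of nGraph (URL placeholder is illustrative).
git clone https://github.com/<your-username>/ngraph.git
cd ngraph

# Configure and build out of tree with CMake (assumed standard layout).
mkdir build && cd build
cmake ..
make -j"$(nproc)"

# Run the test suite locally before opening a pull request,
# as the checklist above suggests.
make check
```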