- 15 Jan, 2019 1 commit
-
Adam Procter authored
-
- 14 Jan, 2019 1 commit
-
Rob Earhart authored
-
- 13 Jan, 2019 1 commit
-
Fenglei authored
-
- 12 Jan, 2019 4 commits
-
Sergey Shalnov authored
-
Rob Earhart authored
* Use static cast where possible
* Tensor API update
* Move prefix reshape elision to be a general pass
* Use pass config to select Winograd optimization
* Use get_is_transpose() to detect transposes
* Use get_default_order to build AxisVectors
-
Sergey Shalnov authored
* IntelGPU backend: Use new clDNN version 12.1
* PR2280. Comments addressed
-
Robert Kimball authored
* first cut at raspberry pi backend
* rename rpi to generic cpu
* disable cursed test
-
- 11 Jan, 2019 1 commit
-
Nick Korovaiko authored
-
- 09 Jan, 2019 2 commits
-
Nick Korovaiko authored
-
Sang Ik Lee authored
-
- 08 Jan, 2019 8 commits
-
Sang Ik Lee authored
* Allow external mklml outside of prebuilt mkldnn install directory.
* Limit prebuilt mkl-dnn support to Linux.
-
Sergey Shalnov authored
-
Adam Rogowiec authored
-
Nishant Patel authored
* Fix signedconv op
* Add assert and change the dynamic scale test case
* Change assert
* Update quantized_conv_bias.cpp
* Update quantized_conv_bias.cpp
-
Pruthvi authored
* Made changes to the slicing logic in the RNN input matrix fusion callback; this fixes a bug in GNMT
* Fix unit-test segfault; add slice-sorting logic to make the replace_node call easier
* i) add check for overlapping slices, ii) addressed PR comments
* Remove ambiguity check
-
Nick Korovaiko authored
* any/all stop-gap CPU implementation
* remove pass
-
Rob Earhart authored
* Use plaidml cmake config
* Require PlaidML if requested
* Don't install libplaidml: we can assume that it's correctly installed on the target, especially since it needs to be correctly installed in order to find its configuration files.
-
Nick Korovaiko authored
-
- 07 Jan, 2019 4 commits
-
Artur Wojcik authored
Signed-off-by: Artur Wojcik <artur.wojcik@intel.com>
-
Adam Rogowiec authored
* Enable support for group attribute.
* UT for ConvTranspose with groups.
* Validate group attribute value.
* Move helper function to unnamed namespace.
* Access values with bounds checking.
* Fix spelling.
-
gcwenger authored
* Simplified & tightened all_close_f parameters. Removed specification of mantissa bits for all_close_f in favor of just specifying tolerance bits. Tightened up all_close_f default. Fixed LRN unit test which had insufficient result precision to pass tighter all_close_f tolerance.
* Addressed PR comments. Reworked mantissa bit and tolerance constants. Clarified and improved graph comparison tolerance calculation flexibility. Clarified unit test tolerance testing.
-
Adam Rogowiec authored
* UT for Softplus ONNX operator testing edge cases.
* Rename UT model name.
* Handle overflows.
* Add UT for infinite values and check them correctly.
* Update values in comment.
-
- 05 Jan, 2019 1 commit
-
Chris Sullivan authored
* Separate out external function base class.
* pt 1: first step to removing m_writer from GPU_Emitter.
* pt 2: add gpu_internal function skeleton.
* pt 3: temporarily add to gpu_backend for prototyping.
* pt 4: add call frame (partial) and runtime constructor.
* pt 5: implement resolution for function memory reservations; build new tensor wrapper for use with call frame.
* pt 6: resolve compilation errors.
* pt 7: add host emitter for emitting host primitives and implement in GPU emitter.
* pt 8: add compile-time manifest.
* pt 9: add simple runtime tracer.
* pt 10: separate runtimes for different functions; index by function name, should switch to using function instance_id for lookup performance.
* pt 11: add function call interface and support nested call frames.
* pt 12: reshape elimination check in emitter needs to include offset.
* pt 13: add default indentation to all op emissions in GPU external functions.
* pt 14: fix constant memory reservation (should not depend on the temporary buffers' existence check).
* pt 15: backward pooling for avg pool requires only one param; rather than passing this param three times, this commit changes the runtime to detect if it's avg pooling and pass the appropriate pointers. This is a holdover until max and avg pool are refactored into separate cuDNN emitters.
* pt 16: update cmake compatibility. The GPU backend can now be built without clang via NGRAPH_DEX_ONLY. If this cmake variable is not defined, then both clang codegen (via gpu external function) and interpreter (via gpu internal function) modes will be built. For now codegen is the default backend but can be explicitly disabled by setting the env. variable NGRAPH_CODEGEN=0/FALSE/NO/etc. Additional note: made codegen::CodeWriter header-only so that it can be used independently of whether the clang codegen library is compiled.
* pt 17: fix issues with merge from master.
* pt 18: factor compile function into a few virtual calls so that common passes can be added in a single location for both backends.
* pt 19: formatting.
* Remove code_writer.cpp from cmake and disable (temporarily) some reduce tests that require changes to gpu_emitter.cpp.
* Move call frame and runtime constructor implementations to source files.
* Use member m_common_function_string.
* Apply analogous bug fix as found in #2145.
* Remove underscore from GPU_CompiledFunction, GPU_ExternalFunction, and GPU_InternalFunction.
* Made static members of GPUCompiledFunction static methods.
* Remove 'No' codegen options, use std::toupper, and applied format.
* Review comments.
* Remove vector overload for resolve inputs/outputs in GPUCallFrame.
* Remove diagnostic pragmas.
-
- 03 Jan, 2019 6 commits
-
Nick Korovaiko authored
* fix throw in cpu_memory_optimization
* add input_tensor back
* more descriptive bail-out msg
-
Robert Kimball authored
* add version.hpp to ngraph install files for external backends
* update date
-
Robert Kimball authored
* update licenses for 2019
* style
-
Adam Rogowiec authored
* Add broadcasting to variadic OP in opset 8.
* Apply style format.
* Update onnx_import.cpp
-
Rob Earhart authored
* nGraph compat updates
* Simplify op structure
* Add zero-dim elimination pass
* Add logical data conversion pass
* Add graphviz support
* Add implicit broadcast pass
* Elide unnecessary reshape ops
* Merge reshapes into convolutions
* Add winograd convolution support
* Allow three-input maxpool backprop
* Add concat elision
* Combine replication operations
* Elide unneeded replicates
* Style update
-
Scott Cyphers authored
* Fix numeric instability in batchnorm bprop
* Another instability
-
- 02 Jan, 2019 2 commits
-
Robert Kimball authored
* print llvm build choice and allow cmake arg to override env var
* fix cmake error
* use existing function
-
Artur Wojcik authored
* [ONNX] Automatically detect value types of TensorProto and AttributeProto. Signed-off-by: Artur Wojcik <artur.wojcik@intel.com>
* onnx: style apply. Signed-off-by: Artur Wojcik <artur.wojcik@intel.com>
-
- 31 Dec, 2018 1 commit
-
Robert Kimball authored
-
- 29 Dec, 2018 1 commit
-
Robert Kimball authored
-
- 28 Dec, 2018 2 commits
-
Robert Kimball authored
-
Sang Ik Lee authored
* Forward nGraph's C/C++ compiler, build type and generator information to cmake based external projects.
* Fix typo.
* Pass generator related info properly.
* Googletest is using DEBUG_POSTFIX
* TBB uses postfix for debug lib.
* Fix typo.
-
- 23 Dec, 2018 2 commits
-
Robert Kimball authored
* Add GPUH hybrid backend
* update manifests
* update node operator<<
* fix GOE
* remove debug
* remove debug
* more cleanup
* add parent support to cpu and intel gpu backend tensors
* cleanup
* fix odd failure when printing node during construction
* fix node output
* address review comments
* style
-
Robert Kimball authored
* update build byproducts to support ninja
* remove unused cmake code
* more cmake cleanup
* display error message if Ninja generator is requested
* fix mkldnn ext project
* revert onnx cmake file
* revert protobuf cmake file
* revert mlsl cmake file
* more fixing
-
- 22 Dec, 2018 3 commits
-
Adam Procter authored
-
Robert Kimball authored
-
Chris Sullivan authored
-