    Cyphers/master to r20 (#2978) · 0995b710
    Scott Cyphers authored
    * nbench: special handling for the "NNP_XXX" op name (#2883)
    
    * remove throw from dtor (#2854)
    
    * add special handling for the 'NNP_xx' op name
    
    * style
    
    * add special handling for the 'NNP_xx' op name
    
    * style
    
    * use description as suggested by Bob
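
    The "remove throw from dtor" item (#2854) reflects a general C++ rule: destructors are noexcept by default, so an exception escaping one calls std::terminate, possibly in the middle of stack unwinding. A minimal sketch of the usual fix, with a made-up ResourceHolder class, is to catch and report inside the destructor instead of letting the exception propagate:

        #include <exception>
        #include <iostream>

        class ResourceHolder
        {
        public:
            ~ResourceHolder()
            {
                // Letting an exception escape a destructor terminates the
                // program; catch and report instead of rethrowing.
                try
                {
                    release();
                }
                catch (const std::exception& e)
                {
                    std::cerr << "cleanup failed: " << e.what() << "\n";
                }
            }

        private:
            void release() { /* stand-in for cleanup that may throw */ }
        };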
    
    * Remove parent from PlaidML tensor initializer (#2923)
    
    * Remove parent from PlaidML tensor initializer
    
    * Remove plaidml tensor parent plumbing
    
    * style
    
    * Add support for move semantics to AlignedBuffer (#2956)
    
    * Add move operations to AlignedBuffer
    
    * unit test
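
    For the AlignedBuffer move-semantics change (#2956), the underlying idea is the standard move pattern for a class that owns a raw allocation: steal the pointer, leave the source empty, and mark both operations noexcept. A rough sketch with simplified members, not the actual nGraph class:

        #include <cstddef>
        #include <cstdlib>

        class AlignedBuffer
        {
        public:
            AlignedBuffer() = default;
            explicit AlignedBuffer(size_t size) : m_size(size), m_data(std::malloc(size)) {}
            ~AlignedBuffer() { std::free(m_data); }

            // Move constructor: take ownership and leave the source empty.
            AlignedBuffer(AlignedBuffer&& other) noexcept
                : m_size(other.m_size), m_data(other.m_data)
            {
                other.m_size = 0;
                other.m_data = nullptr;
            }

            // Move assignment: free our buffer, then take the other's.
            AlignedBuffer& operator=(AlignedBuffer&& other) noexcept
            {
                if (this != &other)
                {
                    std::free(m_data);
                    m_size = other.m_size;
                    m_data = other.m_data;
                    other.m_size = 0;
                    other.m_data = nullptr;
                }
                return *this;
            }

        private:
            size_t m_size = 0;
            void* m_data = nullptr;
        };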
    
    * Create mkldnn primitives at first iteration for codegen - part2 (#2859)
    
    * Create mkldnn primitives at first iteration for CODEGEN.
    
     OPs: add, lstm, and rnn.
    
    *  OPs: batchnorm.
    
    *  OPs: concat and lrn.
    
    Remove dead code.
    
    * Skip in place concat, relu, reshape, and slice when building node_primitive_string_deps_index map.
    
    * Change NGRAPH_ASSERT to NGRAPH_CHECK.
    
    * Address PR Feedback.
    
    * Create mkldnn primitives at first iteration for CODEGEN.
     OPs: convertlayout, relu, leakyrelu, boundedrelu, sigmoid, softmax, slice.
    
    * Fix bugs.
    
    *  OPs: quantizedconcat.
    
    Check if there are descriptors before emitting code to read desc_file.
    
    *  OPs: convolution backward.
    
    Use macro to write mkldnn memory dims to generated file.
    
    *  OPs: MaxPoolWithIndices and MaxPoolWithIndicesBackprop.
    
    Add unit tests for MaxPoolWithIndices, MaxPoolWithIndicesBackprop, and MaxPoolBackprop.
    
    * Fix style error.
    
    *  OPs: AvgPoolBackprop and MaxPoolBackprop.
    
    Add unit test for AvgPoolBackprop.
    
    *  OPs: DeconvolutionBias.
    
    *  OPs: Quantize and Dequantize.
    
    *  OPs: QuantizedDot and QuantizedDotBias.
    
    * Use reference kernel for QuantizedConvolution for CODEGEN when mkldnn does not support the parameter types.
    Get scales for quantization ops in cpu_emitter.
    
    * Fix Windows build error: add CPU_BACKEND_API.
    
    * Use template for quantization ops.
    
    *  OPs: QuantizedMatmul.
    
    Emit reference kernel for QuantizedDot in CODEGEN.
    
    * Remove QuantizedDot from get_scale_index.
    
    * Address PR feedback.
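
    The recurring theme in the codegen changes for #2859 is deferring mkldnn primitive construction from compile time to the first call of the generated function, then reusing the cached primitives on later iterations. A generic create-on-first-iteration cache looks roughly like this (hypothetical helper, not the actual CPU backend code):

        #include <functional>
        #include <memory>
        #include <vector>

        // Lazily built primitive cache: slot i is filled by its builder the first
        // time the compiled function runs and reused on every later iteration.
        template <typename Primitive>
        class PrimitiveCache
        {
        public:
            size_t reserve(std::function<std::unique_ptr<Primitive>()> builder)
            {
                m_builders.push_back(std::move(builder));
                m_primitives.emplace_back(); // empty slot until first use
                return m_primitives.size() - 1;
            }

            Primitive& get(size_t index)
            {
                if (!m_primitives[index])
                {
                    m_primitives[index] = m_builders[index](); // first iteration only
                }
                return *m_primitives[index];
            }

        private:
            std::vector<std::function<std::unique_ptr<Primitive>()>> m_builders;
            std::vector<std::unique_ptr<Primitive>> m_primitives;
        };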
    
    * [FusedOps] Split (#2951)
    
    * Split op skeleton
    
    * Two ways to construct a fused Split so it can be used in the onnx importer
    
    * refactor: move the util::split() helper functions to the core
    
    * Split's decompose_op() implementation using a helper function
    
    * Use fused Split in the onnx_importer
    
    * Code formatting
    
    * PR feedback
    
    * Split helpers moved to ngraph/builder
    
    * Basic UT - split a 1D tensor to 3 equal parts
    
    * UT: Split 2D tensor into variable length parts
    
    * Code formatting
    
    * Catch the proper type of exception in the onnx_importer split()
    
    * Initialize members in the correct order
    
    * Type prop tests for Split
    
    * Code formatting
    
    * PR feedback
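
    The Split work in #2951 comes down to cutting one tensor into several slices along a single axis, either in equal parts or with explicit per-part lengths. A small sketch of the bound computation a decompose-style helper needs, reduced to one axis (hypothetical helper, not the actual builder code):

        #include <cstddef>
        #include <stdexcept>
        #include <utility>
        #include <vector>

        // Given the extent of the split axis and the requested part lengths,
        // return the [begin, end) bounds of each slice along that axis.
        std::vector<std::pair<size_t, size_t>> split_bounds(size_t axis_dim,
                                                            const std::vector<size_t>& lengths)
        {
            std::vector<std::pair<size_t, size_t>> bounds;
            size_t begin = 0;
            for (size_t len : lengths)
            {
                bounds.emplace_back(begin, begin + len);
                begin += len;
            }
            if (begin != axis_dim)
            {
                throw std::invalid_argument("split lengths do not add up to the axis extent");
            }
            return bounds;
        }

    The equal-parts variant is the special case where every length is axis_dim / n; each (begin, end) pair then becomes the lower/upper bound of a Slice along the split axis.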
    
    * Add more infrastructure for specialization of cloned graphs (#2949)
    
    * Virtualize some things that crash when layout descriptor is missing
    
    * More shape specialization
    
    * (very bare) skeleton for dyn elimination
    
    * Miscellaneous
    
    * Lift i32->int64-only restriction on constant folding for Convert
    
    * Add constant folding for ShapeOf, and some tests for new constant folders
    
    * Tests for DynElimination
    
    * Rename specialize_shapes to specialize_function, and add a unit test for value substitution
    
    * Roll back overeager API change in dyn slice bprop (it has to handle right-indexed axes; bummer)
    
    * Add a test for dynamic usage of transpose op
    
    * Fix warning/error about variable shadowing
    
    * Strengthen checks in apply_permutation
    
    * Propagate Constant shapes through Transpose
    
    * Add CHANGE_DYNAMIC_STATE where appropriate
    
    * PR feedback, and fix unit test failure
    
    * Fix PR reference in comment
    
    * PR comments
    
    * Comments for helper funcs
    
    * Remove unique_ptr indirection for the AlignedBuffers
    
    * Fix incorrect indexing of AlignedBuffer vector (whoops!)
    
    * Remove unnecessary CHANGE_DYNAMIC_STATEs
    
    * De-update pass property unit test for const folding
    
    * Replace mystery runes with all_pass_property_off
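
    Among the new constant folders added in #2949, the ShapeOf case is the simplest to state: when the input's shape is fully static, the node's value is just the list of extents, so it can be replaced by a 1-D i64 Constant; any dynamic dimension means the node stays in the graph for DynElimination or runtime handling. A sketch of the check and the folded value, using plain vectors as stand-ins for the graph types:

        #include <cstdint>
        #include <vector>

        // Stand-in for a partial shape: a dynamic dimension is encoded as -1.
        using PartialShape = std::vector<int64_t>;

        bool is_static(const PartialShape& shape)
        {
            for (int64_t dim : shape)
            {
                if (dim < 0)
                {
                    return false;
                }
            }
            return true;
        }

        // Value of ShapeOf when the input shape is static: the extents themselves,
        // ready to be materialized as a 1-D i64 constant.
        std::vector<int64_t> fold_shape_of(const PartialShape& input_shape)
        {
            return std::vector<int64_t>(input_shape.begin(), input_shape.end());
        }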
    
    * Change FusionType to enum class and use EnumMask (#2957)
    
    * constexpr ctor for EnumMask
    
    * added pass properties to core passes.
    
    * change fusion type to have better type safety.
    
    * refactor to use enum mask.
    
    * remove extra code.
    
    * added constants for FusionType backward compatibility.
    
    * spelling.
    
    * grammar fix.
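
    The FusionType change in #2957 swaps an untyped bit-flag integer for an enum class wrapped in an EnumMask, so combining and testing flags stays type-safe while masks can still be built at compile time through the constexpr constructor. A trimmed-down sketch of the idea (nGraph's real EnumMask has more operators, and the flag names here are illustrative):

        #include <cstdint>
        #include <type_traits>

        enum class FusionType : uint32_t
        {
            DIFFERENTIABLE_FUSIONS = 0x1,
            REGULAR_FUSIONS = 0x2,
            FOP_FUSIONS = 0x4,
        };

        // Type-safe bit mask over an enum class: flags from unrelated enums cannot
        // be mixed, unlike a plain integer mask.
        template <typename T>
        class EnumMask
        {
        public:
            using value_type = typename std::underlying_type<T>::type;

            constexpr EnumMask() = default;
            constexpr EnumMask(T flag) : m_value(static_cast<value_type>(flag)) {}

            constexpr EnumMask operator|(EnumMask other) const
            {
                return EnumMask(m_value | other.m_value);
            }

            constexpr bool is_set(EnumMask flags) const
            {
                return (m_value & flags.m_value) == flags.m_value;
            }

        private:
            constexpr explicit EnumMask(value_type value) : m_value(value) {}
            value_type m_value = 0;
        };

    With this in place a pass can accept an EnumMask<FusionType>, and callers build masks from FusionType values (e.g. EnumMask<FusionType>(FusionType::REGULAR_FUSIONS) | FusionType::FOP_FUSIONS) instead of juggling raw integers.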
    
    * update visualize tree file extensions and output formats (#2954)
    
    * update visualize tree file extensions and output formats
    
    * fix runtime error
    
    * Update version, clean up ToC, add more detail to section on inspecting graphs (#2947)
    
    * Update version, clean up ToC, add more detail to section on inspecting graphs...
    
    * Minor adjustments to version module
    * Move distributed training page to extras since there's not much there
    * Fix links that break when doing that
    * Consistent casing on section titles
    * Orphan governance page so we don't have blank/empty links
    * Update release notes with new version module structure
    
    * PR feedback
    
    * Allow NGRAPH_VISUALIZE_TREE_OUTPUT_SHAPES to output partial shapes (#2959)
    
    * Remove functions from cpu which were moved to core (#2962)
    
    * Remove functions from cpu which were moved to core
    
    * Fix a typo
    
    * Remove unused function
    
    * Move zero padded conv fusions from CPUFusion to CoreFusion. (#2969)
    
    * Move zero padded conv fusions from CPUFusion to CoreFusion.
    
    * Address PR feedback: move unit tests to core_fusion.
    
    * Fix Convert for boolean output type in CODEGEN. (#2958)
    
    * Create tensor for the primary backend (#2970)
    
    * create tensor for the primary backend
    
    * move private objects to protected
    
    *  [Fused] LeakyRelu op (#2919)
    
    * [Fused] LeakyRelu op
    
    * Add LeakyRelu to serializer
    
    * Add unit tests
    
    * Fix merge branch 'master' into mkarzyns/fused_leaky_relu
    
    * Change broadcasting rules to NumPy style
    
    * Remove std:: and ngraph:: prefixes
    
    * Rename CPU Runtime LeakyRelu to CPULeakyRelu
    
    * Style apply
    
    * Fix cpu_fusion.fuse_leaky_relu test
    
    * Use Eigen's tanh in the fused sigmoid multiply kernel (#2946)
    
    * Merge branch 'master' into mkarzyns/fused_leaky_relu
    
    * Add LeakyRelu to Intel GPU backend op list
    
    * Add LeakyRelu to Intel GPU backend op list
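
    The fused LeakyRelu op from #2919 computes max(x, alpha * x): positive inputs pass through unchanged and negative ones are scaled by alpha. A reference-style kernel that pins down those semantics (a reference formulation, not necessarily what the CPU backend emits):

        #include <algorithm>
        #include <cstddef>

        // Reference LeakyRelu: out[i] = x[i] if x[i] > 0, otherwise alpha * x[i].
        // Equivalent to max(x, alpha * x) for 0 <= alpha <= 1.
        void leaky_relu(const float* input, float* output, size_t count, float alpha)
        {
            for (size_t i = 0; i < count; ++i)
            {
                output[i] = std::max(input[i], alpha * input[i]);
            }
        }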
    
    * Make private members protected in hybrid classes (#2975)
    
    * make private members protected in hybrid classes
    
    * allow overriding the passes
    
    * [ONNX] Unit tests for QLinearMatMul (#2706)
    
    * [ONNX] Unit test models for QLinearMatMul
    
    * [ONNX] Extended types support for NgraphTestCase
    
    * [ONNX] Move the value comparators to the NgraphTestCase class
    
    * Add test cases
    
    * Add shape checking
    
    * disable GPU tests
    
    * IntelGPU backend: Switch to clDNN which is compatible with gcc4.8 (#2961)
    
    * Added accessor methods for layer op attributes (#2964)
    
    * Added accessor methods for layer op attributes
    
    * style fixes and addressed PR feedback
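
    The accessor-method change in #2964 follows the usual get_/set_ convention for layer-op attributes rather than exposing fields directly. A trivial illustration of the pattern with a made-up attribute:

        #include <cstdint>

        // Illustrative op holding one attribute behind accessors; the attribute
        // name is invented for the example.
        class ExampleLayerOp
        {
        public:
            explicit ExampleLayerOp(int64_t axis) : m_axis(axis) {}

            int64_t get_axis() const { return m_axis; }
            void set_axis(int64_t axis) { m_axis = axis; }

        private:
            int64_t m_axis;
        };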
    
    * Add save/load API to runtime (#2955)
    
    * API defined
    
    * add unit test for save/load with INTERPRETER
    
    * Update per review comments
    
    * fix compiler error
    
    * Backport fix from #2973 (#2976)
    
    * CTCGreedyDecoder layer op (#2965)
    
    * Added CTCGreedyDecoder layer op
    
    * Added comment on seq_len validation checks
    
    * Switch some get_inputs uses to use the newer inputs (#2968)
    
    * Switch some get_inputs uses to use the newer inputs
    
    * Review comments
    
    * update a few files to build on windows (#2974)
    
    * update a few files to build on windows
    
    * more fixes