    Cyphers/master to r20 (#2978) · 0995b710
    Scott Cyphers authored
    * nbench special handle op name for "NNP_XXX" (#2883)
    
    * remove throw from dtor (#2854)
    
    * add special handle for the 'NNP_xx' op name
    
    * style
    
    * add special handle for the 'NNP_xx' op name
    
    * style
    
    * use description as suggested by Bob
    
    * Remove parent from PlaidML tensor initializer (#2923)
    
    * Remove parent from PlaidML tensor initializer
    
    * Remove plaidml tensor parent plumbing
    
    * style
    
    * Add support for move semantics to AlignedBuffer (#2956)
    
    * Add move operations to AlignedBuffer
    
    * unit test
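
    A minimal sketch of what move support for an aligned buffer usually looks like, using a hypothetical AlignedBufferSketch class rather than nGraph's actual AlignedBuffer: the move constructor and move assignment steal the allocation and leave the source empty, so buffers can be handed around without copying.

        #include <cstddef>
        #include <cstdint>
        #include <cstdlib>
        #include <utility>

        // Hypothetical stand-in for an aligned buffer with move support.
        class AlignedBufferSketch
        {
        public:
            AlignedBufferSketch(std::size_t size, std::size_t alignment)
                : m_size(size)
            {
                // Over-allocate, then round the pointer up to the requested alignment.
                m_allocated = static_cast<char*>(std::malloc(size + alignment));
                std::size_t offset =
                    alignment - reinterpret_cast<std::uintptr_t>(m_allocated) % alignment;
                m_aligned = m_allocated + offset;
            }

            AlignedBufferSketch(const AlignedBufferSketch&) = delete;
            AlignedBufferSketch& operator=(const AlignedBufferSketch&) = delete;

            // Move constructor: take ownership and leave the source empty.
            AlignedBufferSketch(AlignedBufferSketch&& other) noexcept
                : m_allocated(std::exchange(other.m_allocated, nullptr))
                , m_aligned(std::exchange(other.m_aligned, nullptr))
                , m_size(std::exchange(other.m_size, 0))
            {
            }

            // Move assignment: free our allocation, then steal the other's.
            AlignedBufferSketch& operator=(AlignedBufferSketch&& other) noexcept
            {
                if (this != &other)
                {
                    std::free(m_allocated);
                    m_allocated = std::exchange(other.m_allocated, nullptr);
                    m_aligned = std::exchange(other.m_aligned, nullptr);
                    m_size = std::exchange(other.m_size, 0);
                }
                return *this;
            }

            ~AlignedBufferSketch() { std::free(m_allocated); }

        private:
            char* m_allocated = nullptr;
            char* m_aligned = nullptr;
            std::size_t m_size = 0;
        };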
    
    * Create mkldnn primitives at first iteration for codegen - part2 (#2859)
    
    * Create mkldnn primitives at first iteration for CODEGEN.
    
     OPs: add, lstm, and rnn.
    
    *  OPs: batchnorm.
    
    *  OPs: concat and lrn.
    
    Remove dead code.
    
    * Skip in-place concat, relu, reshape, and slice when building the node_primitive_string_deps_index map.
    
    * Change NGRAPH_ASSERT to NGRAPH_CHECK.
    
    * Address PR Feedback.
    
    * Create mkldnn primitives at first iteration for CODEGEN.
     OPs: convertlayout, relu, leakyrelu, boundedrelu, sigmoid, softmax, slice.
    
    * Fix bugs.
    
    *  OPs: quantizedconcat.
    
    Check if there are descriptors before emitting code to read desc_file.
    
    *  OPs: convolution backward.
    
    Use macro to write mkldnn memory dims to generated file.
    
    *  OPs: MaxPoolWithIndices and MaxPoolWithIndicesBackprop.
    
    Add unit tests for MaxPoolWithIndices, MaxPoolWithIndicesBackprop, and MaxPoolBackprop.
    
    * Fix style error.
    
    *  OPs: AvgPoolBackprop and MaxPoolBackprop.
    
    Add unit test for AvgPoolBackprop.
    
    *  OPs: DeconvolutionBias.
    
    *  OPs: Quantize and Dequantize.
    
    *  OPs: QuantizedDot and QuantizedDotBias.
    
    * Use reference kernel for QuantizedConvolution for CODEGEN when mkldnn does not support the parameter types.
    Get scales for quantization ops in cpu_emitter.
    
    * Fix Windows build error: add CPU_BACKEND_API.
    
    * Use template for quantization ops.
    
    *  OPs: QuantizedMatmul.
    
    Emit reference kernel for QuantizedDot in CODEGEN.
    
    * Remove QuantizedDot from get_scale_index.
    
    * Address PR feedback.
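
    The codegen change above boils down to deferring mkldnn primitive construction from compile time to the first iteration, then reusing the cached primitive on every later call. A generic sketch of that lazy-creation pattern, with hypothetical Primitive and PrimitiveCache types standing in for the real CPU backend structures:

        #include <cstddef>
        #include <functional>
        #include <memory>
        #include <vector>

        // Hypothetical stand-in for an mkldnn primitive built from descriptors.
        struct Primitive
        {
            void execute() { /* run the kernel */ }
        };

        // One slot per emitted op; slots stay empty until the first iteration runs.
        struct PrimitiveCache
        {
            std::vector<std::unique_ptr<Primitive>> primitives;
        };

        // Emitted functor for one op: build the primitive on the first call, reuse it afterwards.
        std::function<void(PrimitiveCache&)> emit_op(std::size_t index)
        {
            return [index](PrimitiveCache& cache) {
                if (!cache.primitives[index])
                {
                    // First iteration: construct the primitive (descriptor setup happens here).
                    cache.primitives[index] = std::make_unique<Primitive>();
                }
                cache.primitives[index]->execute();
            };
        }

        int main()
        {
            PrimitiveCache cache;
            cache.primitives.resize(1);
            auto op = emit_op(0);
            op(cache); // first iteration: creates and runs the primitive
            op(cache); // later iterations: reuse the cached primitive
        }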
    
    * [FusedOps] Split (#2951)
    
    * Split op skeleton
    
    * Two ways to construct a fused Split to be able to use it in onnx importer
    
    * refactor: move the util::split() helper functions to the core
    
    * Split's decompose_op() implementation using a helper function
    
    * Use fused Split in the onnx_importer
    
    * Code formatting
    
    * PR feedback
    
    * Split helpers moved to ngraph/builder
    
    * Basic UT - split a 1D tensor to 3 equal parts
    
    * UT: Split 2D tensor into variable length parts
    
    * Code formatting
    
    * Catch the proper type of exception in the onnx_importer split()
    
    * Initialize members in the correct order
    
    * Type prop tests for Split
    
    * Code formatting
    
    * PR feedback
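
    For the fused Split op above, the core arithmetic is turning either a number of equal parts or an explicit list of part lengths into (offset, length) pairs along the split axis. A small illustrative sketch of that bookkeeping, not the actual decompose_op() code:

        #include <cstddef>
        #include <stdexcept>
        #include <utility>
        #include <vector>

        // Compute (offset, length) slices for splitting `axis_size` elements into the
        // given part lengths. An empty `lengths` plus `num_splits` requests equal parts.
        std::vector<std::pair<std::size_t, std::size_t>> split_slices(std::size_t axis_size,
                                                                      std::vector<std::size_t> lengths,
                                                                      std::size_t num_splits = 0)
        {
            if (lengths.empty())
            {
                if (num_splits == 0 || axis_size % num_splits != 0)
                {
                    throw std::invalid_argument("axis size is not divisible into equal parts");
                }
                lengths.assign(num_splits, axis_size / num_splits);
            }

            std::vector<std::pair<std::size_t, std::size_t>> slices;
            std::size_t offset = 0;
            for (std::size_t len : lengths)
            {
                slices.emplace_back(offset, len);
                offset += len;
            }
            if (offset != axis_size)
            {
                throw std::invalid_argument("part lengths do not add up to the axis size");
            }
            return slices;
        }

        // Example: split_slices(6, {}, 3)  -> {0,2}, {2,2}, {4,2}  (1D tensor into 3 equal parts)
        //          split_slices(6, {4, 2}) -> {0,4}, {4,2}         (variable-length parts)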
    
    * Add more infrastructure for specialization of cloned graphs (#2949)
    
    * Virtualize some things that crash when layout descriptor is missing
    
    * More shape specialization
    
    * (very bare) skeleton for dyn elimination
    
    * Miscellaneous
    
    * Lift i32->int64-only restriction on constant folding for Convert
    
    * Add constant folding for ShapeOf, and some tests for new constant folders
    
    * Tests for DynElimination
    
    * Rename specialize_shapes to specialize_function, and add a unit test for value substitution
    
    * Roll back overeager API change in dyn slice bprop (it has to handle right-indexed axes; bummer)
    
    * Add a test for dynamic usage of transpose op
    
    * Fix warning/error about variable shadowing
    
    * Strengthen checks in apply_permutation
    
    * Propagate Constant shapes through Transpose
    
    * Add CHANGE_DYNAMIC_STATE where appropriate
    
    * PR feedback, and fix unit test failure
    
    * Fix PR reference in comment
    
    * PR comments
    
    * Comments for helper funcs
    
    * Remove unique_ptr indirection for the AlignedBuffers
    
    * Fix incorrect indexing of AlignedBuffer vector (whoops!)
    
    * Remove unnecessary CHANGE_DYNAMIC_STATEs
    
    * De-update pass property unit test for const folding
    
    * Replace mystery runes with all_pass_property_off
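
    One piece of this specialization work is constant folding for ShapeOf: once a node's input shape is fully static, the ShapeOf node can be replaced by a constant holding the shape values. A toy illustration of that idea using simplified stand-in node types, not nGraph's real pass infrastructure:

        #include <cstdint>
        #include <memory>
        #include <vector>

        // Simplified stand-in node types for illustration only.
        struct Node
        {
            virtual ~Node() = default;
        };

        struct Parameter : Node
        {
            std::vector<std::int64_t> shape; // fully static shape
        };

        struct ShapeOf : Node
        {
            std::shared_ptr<Parameter> input;
        };

        struct Constant : Node
        {
            std::vector<std::int64_t> values;
        };

        // Fold ShapeOf(input) into a Constant when the input shape is static.
        std::shared_ptr<Node> fold_shape_of(const std::shared_ptr<ShapeOf>& shape_of)
        {
            auto constant = std::make_shared<Constant>();
            constant->values = shape_of->input->shape; // e.g. {2, 3, 4} becomes the constant data
            return constant;
        }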
    
    * Change FusionType to enum class and use EnumMask (#2957)
    
    * constexpr ctor for EnumMask
    
    * added pass properties to core passes.
    
    * change fusion type to have better type safety.
    
    * refactor to use enum mask.
    
    * remove extra code.
    
    * added constants for FusionType backward compatibility.
    
    * spelling.
    
    * grammar fix.
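
    The FusionType change above replaces a plain enum bitmask with an enum class wrapped in an EnumMask-style helper, so bitwise combinations stay type-safe. A generic sketch of such a mask with a constexpr constructor, using hypothetical EnumMaskSketch and FusionTypeSketch names rather than nGraph's own EnumMask and FusionType:

        #include <cstdint>
        #include <type_traits>

        enum class FusionTypeSketch : std::uint32_t
        {
            DIFFERENTIABLE_FUSIONS = 0x1,
            REGULAR_FUSIONS = 0x2,
            FOP_FUSIONS = 0x4,
        };

        // Minimal EnumMask-style wrapper: only values of the named enum can be combined.
        template <typename T>
        class EnumMaskSketch
        {
        public:
            using value_type = typename std::underlying_type<T>::type;

            constexpr EnumMaskSketch() = default;
            constexpr EnumMaskSketch(T value)
                : m_value(static_cast<value_type>(value))
            {
            }

            constexpr EnumMaskSketch operator|(EnumMaskSketch other) const
            {
                return EnumMaskSketch(m_value | other.m_value);
            }

            constexpr bool is_set(T value) const
            {
                return (m_value & static_cast<value_type>(value)) != 0;
            }

        private:
            constexpr explicit EnumMaskSketch(value_type raw)
                : m_value(raw)
            {
            }

            value_type m_value = 0;
        };

        // Usage: combinations are type-checked; raw integers no longer convert silently.
        constexpr auto mask = EnumMaskSketch<FusionTypeSketch>(FusionTypeSketch::REGULAR_FUSIONS) |
                              FusionTypeSketch::FOP_FUSIONS;
        static_assert(mask.is_set(FusionTypeSketch::FOP_FUSIONS), "FOP_FUSIONS should be set");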
    
    * update visualize tree file extensions and output formats (#2954)
    
    * update visualize tree file extensions and output formats
    
    * fix runtime error
    
    * Update version, clean up ToC, add more detail to section on inspecting graphs… (#2947)
    
    * Update version, clean up ToC, add more detail to section on inspecting graphs...
    
    * Minor adjustments to version module
    * Move distributed training page to extras since there's not much there
    * Fix links that break when doing that
    * Consistent casing on section titles
    * Orphan governance page so we don't have blank/empty links
    * Update release notes with new version module structure
    
    * PR feedback
    
    * Allow NGRAPH_VISUALIZE_TREE_OUTPUT_SHAPES to output partial shapes (#2959)
    
    * Remove functions from cpu which were moved to core (#2962)
    
    * Remove functions from cpu which were moved to core
    
    * Fix a typo
    
    * Remove unused function
    
    * Move zero padded conv fusions from CPUFusion to CoreFusion. (#2969)
    
    * Move zero padded conv fusions from CPUFusion to CoreFusion.
    
    * Address PR feedback: move unit tests to core_fusion.
    
    * Fix Convert for boolean output type in CODEGEN. (#2958)
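
    The Convert fix above concerns producing a boolean output type from generated code. Without going into the patch itself, the usual pitfall is that a plain cast to the boolean storage type (often a one-byte char) keeps arbitrary non-zero values instead of normalizing them to 0/1; a reference-style conversion kernel makes that explicit:

        #include <cstddef>

        // Illustrative element-wise convert kernel; `char` stands in for a
        // boolean element type stored as one byte.
        template <typename InputType>
        void convert_to_boolean(const InputType* input, char* output, std::size_t count)
        {
            for (std::size_t i = 0; i < count; ++i)
            {
                // Normalize every non-zero input to 1 rather than casting it straight through.
                output[i] = (input[i] != InputType{0}) ? 1 : 0;
            }
        }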
    
    * Create tensor for the primary backend (#2970)
    
    * create tensor for the primary backend
    
    * move private objects to protected
    
    * [Fused] LeakyRelu op (#2919)
    
    * [Fused] LeakyRelu op
    
    * Add LeakyRelu to serializer
    
    * Add unit tests
    
    * Fix merge branch 'master' into mkarzyns/fused_leaky_relu
    
    * Change broadcasting rules to NumPy style
    
    * Remove std:: and ngraph:: prefixes
    
    * Rename CPU Runtime LeakyRelu to CPULeakyRelu
    
    * Style apply
    
    * Fix cpu_fusion.fuse_leaky_relu test
    
    * Use eigen's tanh in the fused sigmoid multiply kernel (#2946)
    
    * Merge branch 'master' into mkarzyns/fused_leaky_relu
    
    * Add LeakyRelu to Intel GPU backend op list
    
    * Add LeakyRelu to Intel GPU backend op list
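
    The fused LeakyRelu op above computes f(x) = x for x > 0 and f(x) = alpha * x otherwise. A reference-style sketch of that element-wise computation, with the slope alpha simplified to a plain scalar here; it is not the CPU backend's actual kernel:

        #include <cstddef>

        // Reference-style LeakyRelu: pass positive values through, scale the rest by alpha.
        void leaky_relu(const float* input, float* output, std::size_t count, float alpha)
        {
            for (std::size_t i = 0; i < count; ++i)
            {
                float x = input[i];
                output[i] = (x > 0.0f) ? x : alpha * x;
            }
        }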
    
    * Make private members protected in hybrid classes (#2975)
    
    * make private members protected in hybrid classes
    
    * allow overriding the passes
    
    * [ONNX] Unit tests for QLinearMatMul (#2706)
    
    * [ONNX] Unit test models for QLinearMatMul
    
    * [ONNX] Extended types support for NgraphTestCase
    
    * [ONNX] Move the value comparators to the NgraphTestCase class
    
    * Add test cases
    
    * Add shape checking
    
    * disable GPU tests
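
    ONNX's QLinearMatMul multiplies 8-bit quantized inputs into an 8-bit quantized output: each operand carries a scale and zero point, the integer dot products are taken over zero-point-shifted values, and the result is rescaled by a_scale * b_scale / y_scale and re-offset by the output zero point. A scalar sketch of that arithmetic for one output element, for illustration only and not the importer's code:

        #include <algorithm>
        #include <cmath>
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // One output element of a quantized matmul: dot product over the shared dimension
        // on zero-point-shifted integers, then rescale and re-quantize.
        std::uint8_t qlinear_dot(const std::vector<std::uint8_t>& a_row,
                                 const std::vector<std::uint8_t>& b_col,
                                 float a_scale, std::uint8_t a_zero_point,
                                 float b_scale, std::uint8_t b_zero_point,
                                 float y_scale, std::uint8_t y_zero_point)
        {
            std::int32_t acc = 0;
            for (std::size_t k = 0; k < a_row.size(); ++k)
            {
                acc += (static_cast<std::int32_t>(a_row[k]) - a_zero_point) *
                       (static_cast<std::int32_t>(b_col[k]) - b_zero_point);
            }
            float result = acc * (a_scale * b_scale) / y_scale + y_zero_point;
            // Round and clamp back into the unsigned 8-bit range.
            return static_cast<std::uint8_t>(std::min(255.0f, std::max(0.0f, std::round(result))));
        }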
    
    * IntelGPU backend: Switch to clDNN which is compatible with gcc4.8 (#2961)
    
    * Added accessor methods for layer op attributes (#2964)
    
    * Added accessor methods for layer op attributes
    
    * style fixes and addressed PR feedback
    
    * Add save/load API to runtime (#2955)
    
    * API defined
    
    * add unit test for save/load with INTERPRETER
    
    * Update per review comments
    
    * fix compiler error
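
    The save/load commit above adds a way to serialize a compiled artifact and reload it later without recompiling; the exact nGraph signatures are defined in the PR. The round-trip idea looks roughly like the following toy CompiledArtifact class, with hypothetical save/load names and a std::stringstream standing in for a file:

        #include <iostream>
        #include <iterator>
        #include <sstream>
        #include <string>
        #include <utility>

        // Toy stand-in for a compiled executable that round-trips through a stream.
        class CompiledArtifact
        {
        public:
            explicit CompiledArtifact(std::string payload)
                : m_payload(std::move(payload))
            {
            }

            // Serialize the compiled state to a stream (a file in practice).
            void save(std::ostream& out) const { out << m_payload; }

            // Reconstruct an executable from previously saved state, skipping compilation.
            static CompiledArtifact load(std::istream& in)
            {
                std::string payload((std::istreambuf_iterator<char>(in)),
                                    std::istreambuf_iterator<char>());
                return CompiledArtifact(std::move(payload));
            }

            const std::string& payload() const { return m_payload; }

        private:
            std::string m_payload;
        };

        int main()
        {
            CompiledArtifact original("compiled function state");
            std::stringstream stream;
            original.save(stream);

            CompiledArtifact restored = CompiledArtifact::load(stream);
            std::cout << restored.payload() << "\n"; // prints the round-tripped state
        }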
    
    * Backport fix from #2973 (#2976)
    
    * CTCGreedyDecoder layer op (#2965)
    
    * Added CTCGreedyDecoder layer op
    
    * Added comment on seq_len validation checks
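
    CTCGreedyDecoder, added above, implements the standard greedy CTC decode: take the argmax class at each time step, collapse consecutive repeats when repeat merging is enabled, and drop the blank label. A compact sketch of that algorithm on a single sequence, for illustration rather than the layer op itself; it assumes the blank is the last class index:

        #include <algorithm>
        #include <cstddef>
        #include <iterator>
        #include <vector>

        // Greedy CTC decode of one sequence. probs is time-major: probs[t] holds the
        // class scores for time step t, with the last class index used as the blank.
        std::vector<std::size_t> ctc_greedy_decode(const std::vector<std::vector<float>>& probs,
                                                   bool merge_repeated = true)
        {
            const std::size_t blank = probs.empty() ? 0 : probs.front().size() - 1;
            std::vector<std::size_t> decoded;
            std::size_t previous = blank;

            for (const auto& scores : probs)
            {
                // Argmax over classes at this time step.
                std::size_t best = std::distance(scores.begin(),
                                                 std::max_element(scores.begin(), scores.end()));
                // Skip blanks and (optionally) repeats of the previous emission.
                if (best != blank && !(merge_repeated && best == previous))
                {
                    decoded.push_back(best);
                }
                previous = best;
            }
            return decoded;
        }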
    
    * Switch some get_inputs uses to use the newer inputs (#2968)
    
    * Switch some get_inputs uses to use the newer inputs
    
    * Review comments
    
    * update a few files to build on windows (#2974)
    
    * update a few files to build on windows
    
    * more fixes