Create mkldnn primitives at first iteration for codegen - part2 (#2859)
* Create mkldnn primitives at first iteration for CODEGEN (see the sketch below). OPs: add, lstm, and rnn.
* OPs: batchnorm.
* OPs: concat and lrn. Remove dead code.
* Skip in-place concat, relu, reshape, and slice when building the node_primitive_string_deps_index map.
* Change NGRAPH_ASSERT to NGRAPH_CHECK.
* Address PR feedback.
* Create mkldnn primitives at first iteration for CODEGEN. OPs: convertlayout, relu, leakyrelu, boundedrelu, sigmoid, softmax, slice.
* Fix bugs.
* OPs: quantizedconcat. Check if there are descriptors before emitting code to read desc_file.
* OPs: convolution backward. Use a macro to write mkldnn memory dims to the generated file.
* OPs: MaxPoolWithIndices and MaxPoolWithIndicesBackprop. Add unit tests for MaxPoolWithIndices, MaxPoolWithIndicesBackprop, and MaxPoolBackprop.
* Fix style error.
* OPs: AvgPoolBackprop and MaxPoolBackprop. Add a unit test for AvgPoolBackprop.
* OPs: DeconvolutionBias.
* OPs: Quantize and Dequantize.
* OPs: QuantizedDot and QuantizedDotBias.
* Use the reference kernel for QuantizedConvolution in CODEGEN when mkldnn does not support the parameter types. Get scales for quantization ops in cpu_emitter.
* Fix Windows build error: add CPU_BACKEND_API.
* Use a template for quantization ops.
* OPs: QuantizedMatmul. Emit the reference kernel for QuantizedDot in CODEGEN.
* Remove QuantizedDot from get_scale_index.
* Address PR feedback.
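The common thread in the list above is deferring mkldnn primitive construction from codegen time to the first call of the generated function, after which the cached primitive is reused on every later iteration. The following is a minimal, hypothetical C++ sketch of that first-iteration pattern only; the names (`Primitive`, `RuntimeContext`, `emitted_relu`, `primitive_index`) are placeholders and not the actual nGraph CPU backend or mkldnn identifiers.

```cpp
// Hypothetical illustration of "create the primitive at the first iteration":
// the emitted functor checks a per-node slot, builds the primitive lazily on
// the first call, and reuses it on all subsequent calls.
#include <cstddef>
#include <memory>
#include <vector>

// Stand-in for an mkldnn primitive plus its memory descriptors.
struct Primitive
{
    explicit Primitive(size_t element_count) : count(element_count) {}
    void execute(const float* src, float* dst) const
    {
        for (size_t i = 0; i < count; ++i) // e.g. a ReLU
            dst[i] = src[i] > 0.f ? src[i] : 0.f;
    }
    size_t count;
};

// Stand-in for the per-function runtime context: each mkldnn-backed node is
// assigned an index at codegen time, but the slot stays empty until the node
// actually runs.
struct RuntimeContext
{
    std::vector<std::unique_ptr<Primitive>> primitives;
};

// Shape of the code the emitter generates for one node.
void emitted_relu(RuntimeContext& ctx, size_t primitive_index,
                  const float* arg, float* out, size_t count)
{
    if (!ctx.primitives[primitive_index]) // first iteration: build and cache
    {
        ctx.primitives[primitive_index] = std::make_unique<Primitive>(count);
    }
    ctx.primitives[primitive_index]->execute(arg, out);
}

int main()
{
    RuntimeContext ctx;
    ctx.primitives.resize(1); // one mkldnn-backed node in this toy graph

    float in[4] = {-1.f, 2.f, -3.f, 4.f};
    float out[4];
    emitted_relu(ctx, 0, in, out, 4); // first call builds the primitive
    emitted_relu(ctx, 0, in, out, 4); // later calls reuse it
    return 0;
}
```

This keeps codegen output small and avoids constructing primitives for nodes that never execute, at the cost of a one-time check and build on the first iteration.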