1. 25 Feb, 2020 1 commit
  2. 24 Feb, 2020 7 commits
  3. 21 Feb, 2020 11 commits
  4. 20 Feb, 2020 2 commits
    • [ONNX] Add dynamic shapes support for pooling ops (#4285) · fdd8db66
      Mateusz Bencer authored
      * Switch to PartialShape in onnx_importer ValueInfo
      
      * Construct dynamic dimensions out of ONNX dimensions defined as dim_param
      
      * Validate the PartialShape of inputs created from an ONNX model with dynamic shapes
      
      * Validate the output shape inference for a dynamic ONNX model
      
      * Test the execution of an ONNX model with dynamic dimensions
      
      * Test the Ax+B with more than one batch size
      
      * Provenance tagging adjustments - PartialShape instead of Shape
      
      * Correct translation of ONNX shapes to nG shapes
      
      * Test the shape of Constant produced by scalar initializers
      
      * Review comments & more strict assertions in UT
      
      * UT checking a dynamic rank input
      
      * Fully dynamic input inference test
      
      * first dynamic version
      
      * modified UTs
      
      * Added assert checks
      
      * Added specialised methods
      
      * first version of AvgPool
      
      * code review remarks introduced
      
      * Changed tests to use default BackendMode value
      
      * Reverted unrelated changes
      
      * first version of maxpool
      
      * Changed PoolingFactory to support dynamic shapes
      
      * fixed Pad op
      
      * Added UTs for global ops
      
      * Code review remarks introduced
      
      * code review remarks introduced
      
      * Code refactor
      
      * Code review remarks introduced
      Co-authored-by: Tomasz Dołbniak <tomasz.dolbniak@intel.com>
      Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
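
      The commit above switches the importer's ValueInfo from Shape to PartialShape, so ONNX dimensions declared via dim_param (rather than a concrete dim_value) become dynamic dimensions. A minimal sketch of that mapping using nGraph's public PartialShape/Dimension API; the helper name and signature are illustrative, not the importer's actual code:

      ```cpp
      #include <ngraph/ngraph.hpp>

      using namespace ngraph;

      // Hypothetical helper: an ONNX dimension carrying only a symbolic dim_param
      // maps to Dimension::dynamic(); a concrete dim_value maps to a static Dimension.
      Dimension to_ng_dimension(bool has_dim_value, int64_t dim_value)
      {
          return has_dim_value ? Dimension{dim_value} : Dimension::dynamic();
      }

      int main()
      {
          // Typical NCHW pooling input where only the batch size is unknown.
          PartialShape input_shape{to_ng_dimension(false, 0), 3, 224, 224};
          // The rank (4) is static even though the shape is not fully static.
          return (input_shape.rank().is_static() && !input_shape.is_static()) ? 0 : 1;
      }
      ```
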
    • Add Round op to ONNX importer (#4303) · d605e7fa
      Ewa Tusień authored
      * Added Round op to onnx importer.
      
      * Added a new line.
      
      * Excluded test from plaidml.
      
      * Code formatting.
      
      * Added header.
      
      * Disabled test on GPU.
      Co-authored-by: Chris Sullivan <chris.sullivan@intel.com>
      Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
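
      Per the ONNX spec, Round rounds each value to the nearest integer and resolves halves to the nearest even integer. A standalone sketch of that rounding rule using standard C++ (std::nearbyint under FE_TONEAREST); this only illustrates the op's expected semantics and is not the importer code added by the commit:

      ```cpp
      #include <cfenv>
      #include <cmath>
      #include <cstdio>

      int main()
      {
          // std::nearbyint honours the current rounding mode; FE_TONEAREST is
          // round-half-to-even, which matches ONNX Round's tie-breaking rule.
          std::fesetround(FE_TONEAREST);
          const float inputs[] = {0.9f, 2.3f, 1.5f, 2.5f, -4.5f};
          for (float x : inputs)
          {
              std::printf("Round(%.1f) = %.1f\n", x, std::nearbyint(x));
          }
          // Expected output: 1.0, 2.0, 2.0, 2.0, -4.0
          return 0;
      }
      ```
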
  5. 19 Feb, 2020 5 commits
  6. 18 Feb, 2020 6 commits
    • Unify (static) auto-broadcasting helpers. (#4242) · fe054d67
      Adam Osewski authored
      * Helper function get_axes_mapping.
      
      * Enhance Broadcast:v1 NUMPY broadcasting.
      
      - Enable NUMPY broadcasting mechanism to work in both directions:
          target_shape <-> arg_shape
      
      * Add opset1:squeeze and fix bug in reading squeezed axis idx.
      
      * Fix and enhance downgrade pass for Broadcast:v1
      
      * Use Broadcast:v1 in ONNX Expand operator.
      
      * Replace Broadcast:v0 with v1 in some helper functions.
      
      * Remove call to deprecated legacy_broadcasting helper function.
      
      * Add helper get_axes_mapping_output function.
      
      * Use Broadcast:v1 directly instead of the helper function.
      
      * Revert to v0 operators in the helper function.
      
      * Use helper function and some refactoring.
      
      * Add legacy style broadcast helper function for opset1.
      
      * Use helper broadcasting function for arithmetic operators.
      
      * Add empty axis only if its size is equal to one.
      
      * Apply review remarks:
      
      - Rename broadcasting function, deleting the _values_ infix.
      - Remove variables used only once.
      - Use STL library where possible.
      - Remove unnecessary conditions.
      
      * Add helper for Broadcast:v1.
      
      * Fix merge artifact and force unsigned type for argument.
      
      * Review. Add additional check for static output.
      
      * Apply clang-format.
      
      * Fix: call v0 ops in ngraph::builder namespace.
      
      * Move opset1 broadcasting helpers from util/broadcasting.hpp
      
      * Use autobroadcast built-in mechanism for arithmetic operators in RNN.
      
      * Move helper functions to autobroadcast.hpp file.
      
      - Update calls with new namespace ngraph::builder
      - Remove calls using shared_ptr<ngraph::Node> and replace them with
        ones accepting Output<ngraph::Node>
      - Some small formatting (remove unnecessary namespace prefix)
      
      * Remove unused function.
      
      * Rename error class to reflect that it's NumPy-related.
      
      * Fix thrown error name in autobroadcast UT.
      
      * Code refactoring.
      
      - Use one set of helpers to broadcast a node according to the NumPy scheme
      
      * Documentation formatting.
      
      * Remove include to deleted header.
      
      * Apply style format.
      
      * Remove std:: prefix.
      
      * Do reshape and/or broadcast only when necessary.
      
      * Remove std:: and ngraph:: prefixes.
      
      * UT for numpy_broadcast_for_matmul and legacy broadcast.
      
      * Rename helper function.
      
      * UT for opset1 legacy broadcast helper function.
      
      * Add more UT for get_axes_mapping and style-format.
      
      * Review comments.
      
      * Replace an if with NGRAPH_WARN with an NGRAPH_CHECK.
      Co-authored-by: Michał Karzyński <postrational@users.noreply.github.com>
      Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
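
      The helpers unified in this commit implement NumPy-style broadcasting: shapes are aligned from the right, and each pair of dimensions must either match or contain a 1. A standalone sketch of that rule for fully static shapes follows; it is not the builder::numpy_broadcast API itself, whose declarations the commit moves into autobroadcast.hpp:

      ```cpp
      #include <algorithm>
      #include <cstddef>
      #include <stdexcept>
      #include <utility>
      #include <vector>

      // Compute the NumPy-broadcast result shape of two static shapes, or throw
      // if the shapes are not broadcastable.
      std::vector<std::size_t> numpy_broadcast_shape(std::vector<std::size_t> a,
                                                     std::vector<std::size_t> b)
      {
          if (a.size() < b.size())
          {
              std::swap(a, b);
          }
          std::vector<std::size_t> out = a;
          const std::size_t offset = a.size() - b.size();
          for (std::size_t i = 0; i < b.size(); ++i)
          {
              const std::size_t da = a[offset + i];
              const std::size_t db = b[i];
              if (da != db && da != 1 && db != 1)
              {
                  throw std::invalid_argument("shapes are not broadcastable");
              }
              out[offset + i] = std::max(da, db);
          }
          return out;
      }

      int main()
      {
          // {8, 1, 6, 1} broadcast with {7, 1, 5} yields {8, 7, 6, 5}.
          const auto result = numpy_broadcast_shape({8, 1, 6, 1}, {7, 1, 5});
          return result == std::vector<std::size_t>{8, 7, 6, 5} ? 0 : 1;
      }
      ```
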
    • Diego Caballero
    • Michał Karzyński · 5803d20f
    • Use default_opset in ONNX importer code (#4332) · 7f98942d
      Michał Karzyński authored
      * Use default_opset in Gemm
      
      * Use default_opset in Cos and Cosh
      
      * Use default_opset in Slice
      
      * Use default_opset in Log
      
      * Use explicit opset0 in ScatterND
      
      * Use default_opset in Hardmax
      
      * Use default_opset in utils
      
      * Remove redundant headers
      
      * Style check
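
      The pattern behind this change: the importer builds nodes through a single default_opset namespace alias, so retargeting the whole ONNX importer to a newer opset becomes a one-line change. A sketch of the idea; the alias target, include path, and the make_log helper are assumptions for illustration, not the importer's actual code:

      ```cpp
      #include <memory>
      #include <ngraph/ngraph.hpp>
      #include <ngraph/opsets/opset1.hpp>

      // Assumed alias target; the real importer defines this alias in one header
      // so it can be switched to a newer opset in a single place.
      namespace default_opset = ngraph::opset1;

      // Illustrative handler-style helper in the spirit of the commit.
      std::shared_ptr<ngraph::Node> make_log(const ngraph::Output<ngraph::Node>& x)
      {
          return std::make_shared<default_opset::Log>(x);
      }

      int main()
      {
          std::shared_ptr<ngraph::Node> x = std::make_shared<ngraph::op::Parameter>(
              ngraph::element::f32, ngraph::Shape{2, 2});
          auto log_node = make_log(x);
          return log_node->get_output_size() == 1 ? 0 : 1;
      }
      ```
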
    • Merge branch 'mbencer/BuilderSplitV1' of https://github.com/NervanaSystems/ngraph into mbencer/BuilderSplitV1 · 195b1aa8
      mbencer authored
    • Code review remarks introduced · 68698d51
      mbencer authored
  7. 16 Feb, 2020 1 commit
    • POC enabling Resnet50 with dynamic batch dimension (#4298) · b2da4cee
      Tomasz Dołbniak authored
      * Constify the onnx importer conv
      
      * Extract and fix the groups attribute validation for Conv
      
      * Check if the convolution's data input rank is static
      
      * Validate the groups attribute against channels and filters
      
      * Validate the conv operation in a separate function
      
      * Dynamically broadcast the conv bias if needed
      
      * Import a test model with dynamic batch conv op
      
      * Run a conv test with dynamic batch
      
      * Cleanup of conv bias handling code
      
      * Use a proper Broadcast constructor for bias in onnx conv
      
      * Handle dynamic ReduceMean with statically defined rank
      
      * Use the target shape rank to construct the default output shape for Broadcast
      
      * Handle ONNX Squeeze with dynamic input and static rank
      
      * Handle ONNX Shape with dynamic input and static rank
      
      * Handle the dynamic target shape in ONNX Reshape
      
      * Fix for the ONNX Shape input validation
      
      * Handle ONNX Softmax with dynamic input and static rank
      
      * Fix the failing Broadcast type prop test
      
      * Code formatting
      
      * Don't broadcast bias before adding it to the conv node
      
      * Drop the conv node validation and rely on the core op implementation checks
      
      * Code review feedback
      
      * Revert the Broadcast op changes
      
      * More code review feedback
      
      * Dynamic conv test using ng test case
      
      * Obsolete headers removal
      
      * Code formatting
      
      * Variable names refactor
      
      * Disable model_conv_with_dynamic_batch test on GPU
      
      * Code formatting
      Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
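
      Several bullets in this commit hinge on the same observation: a dynamic batch dimension leaves the rank static, so rank-based validation (for example, checking a convolution's groups attribute against the channel count) can still run as long as the channels dimension is known. A standalone sketch of that check using nGraph's PartialShape API; the function name and the NCHW channel index are illustrative:

      ```cpp
      #include <memory>
      #include <ngraph/ngraph.hpp>

      using namespace ngraph;

      // True when the input's rank and its channels dimension (NCHW index 1) are
      // both statically known, even if other dimensions (e.g. batch) are dynamic.
      bool can_validate_groups(const Output<Node>& data)
      {
          const PartialShape& shape = data.get_partial_shape();
          return shape.rank().is_static() && shape[1].is_static();
      }

      int main()
      {
          // NCHW input with a dynamic batch: rank 4 is known, channels (=3) are known.
          std::shared_ptr<Node> data = std::make_shared<op::Parameter>(
              element::f32, PartialShape{Dimension::dynamic(), 3, 224, 224});
          return can_validate_groups(data) ? 0 : 1;
      }
      ```
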
  8. 14 Feb, 2020 4 commits
  9. 13 Feb, 2020 3 commits