    Pruthvi/bi rnn (#2232) · a444f7a9
    Pruthvi authored
    * - Added reorder support for rnn weights_layer/iter
    
    * i) fixed compilation issues ii) working, but still observing precision errors
    
    * i) fixed failing rnn unit test for DEX ii) refactored workspace in RNN mkldnn emitter
    
    * i) added support for src reorder to TNC from NTC
    
    * reorder support for rnn output from NTC to TNC
    
    * - added support for rnn weight reorder ldgoi -> ldigo
    - code refactor for lstm/rnn kernel in mkldnn emitter
    
    * - refactor rnn mkldnn kernel, change variable names
    
    * fix RNN codegen kernel
    
    * disable layer rnn fusion pass, to test CI
    
    * method to validate recurrent rnn inputs
    
    * add correlated matches for Recurrent RNN PM
    
    * - simplify reorder logic for rnn_weights
    - fix graph pattern for fusing rnn cell across time steps
    
    * do weights reorders in rnn timesteps fusion
    
    * refactored LSTM graph pass
    
    * - Bug fix for finding the lstm inputs deterministically
    - Refactored LSTM graph pass to single pass
    - made changes to LSTM RNN time step fusion graph pass
    
    * - use replace_node instead of replace_output in Lstm_step_wise fusion graph pass
    
    * fix compilation error
    
    * Fix GNMT rnn fusion
    
    * check if the node is in use before replacing in RNN graph passes
    
    *  i) fix style ii) fix topo sort issue in RNN graph pass
    
    * style fix
    
    * fix bug in simplify_concat pass
    
    * replaces Lstm1 -> {GOE1, GOE2} -> {Slice1, Slice2} -> Concat -> Lstm2 with Lstm1 -> Lstm2
    
    * cse for convert layout
    
    * addressed PR comments
    
    * - optimization pass to remove Lstm1 -> {GOE1, GOE2} -> {Slice1, Slice2} -> Lstm2
    - conditional fusing of LSTM cells only for the decoder
    
    * made changes to multi layer RNN fusion callback
    
    * fix asserts in RNN op
    
    * - added support to fuse layers when slc=dlc for RNN cells
    - bug fix on the sanity checks for RNN Op
    
    * - support RNN layer fusion till slc = dlc
    - bug fixes in multi layer rnn fusion call back
    
    * capture reshape in the RNN weights
    
    * Addressed PR comments
    
    * - added comments in multi layer PM call back
    - fuse only if slc == dlc across layers
    
    * restore deleted 3_lstm_cell_forward.json file
    
    * fix typo
    
    * fix failing unit tests
    
    * When processing in place slice, do not change the offset of the slice node if the argument pointer comes from function input.
    
    * Address PR feedback: process in place slice after propagating in place input.
    
    * Set INTERMEDIATE role before propagating in place input.
    
    * Do not add temporaries to the variable name map before propagating in place input in codegen.
    
    * Fix a bug in codegen.
    
    * Fix a bug in codegen slice.
    
    * reenable disabled rnn unit test
    
    * fix compiler error
    
    * - bug fix in the slicing logic for the layer fused rnn cell
    - fix failing rnn unit test
    
    * - Addressed PR comments
    - removed redundant checks from the rnn graph pass
    - simplified rnn call back replace node logic
    
    * - added new multilayer rnn *.json file
    - fix test case
    
    * [PRIVATE BRANCH] Style fixes (#2080)
    
    * Style fixes
    
    * change order of lstm gates
    
    * WIP bi rnn
    
    * [PRIVATE BRANCH] Jbobba/rnn fusion review (#2113)
    
    * Style fixes for single-layer RNN fusion
    
    * Style fixes to multi-layer RNN
    
    * added callback routine for bi-directional rnn
    
    * fix rnn op ctor, rnn mkldnn emitter to accommodate bi directional rnn
    
    * style fix
    
    * added helper function for RNNs to query direction and cell_type
    
    * fix clang error
    
    * - unit test case for bi rnn fusion
    - style fix
    
    * - updated bi-rnn graph pass to handle reverse and reverse_seq ops in the predicate
    - added bi-rnn inter v/s cpu unit test case
    - add support in mkldnn_utils to create_md with tnc/ntc format
    
    * - added enum type to deduce rnn_type
    
    * Addressed PR comments
        - handle reshapes from {t, n, c} to {n, t, c} in the graph pass
    
    * fix style
    
    * fix clang error
    
    * fix style
    
    * i) move enum specific to rnn to separate header
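
    Several of the commits above add reorders between the NTC and TNC activation layouts (MKL-DNN's RNN primitives consume TNC, while frameworks commonly emit NTC). As a rough illustration only, here is a standalone sketch of what such a reorder computes; this is not the actual MKL-DNN reorder primitive or nGraph pass, and `ntc_to_tnc` is a hypothetical helper name:

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Reorder a dense {N, T, C} buffer into {T, N, C} order by swapping the
    // batch (N) and time (T) axes; the channel (C) axis stays innermost.
    std::vector<float> ntc_to_tnc(const std::vector<float>& src,
                                  size_t n, size_t t, size_t c)
    {
        std::vector<float> dst(src.size());
        for (size_t in = 0; in < n; ++in)
            for (size_t it = 0; it < t; ++it)
                for (size_t ic = 0; ic < c; ++ic)
                    dst[(it * n + in) * c + ic] = src[(in * t + it) * c + ic];
        return dst;
    }
    ```

    The ldgoi -> ldigo weight reorder mentioned earlier is the analogous axis swap on the weight tensors (input-channel vs. gate/output-channel axes) rather than on the activations.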