- 05 Feb, 2018 5 commits
  - Jayaram Bobba authored: Merge branch 'jbobba/mkldnn-outlining' of https://github.com/NervanaSystems/private-ngraph-cpp into jbobba/mkldnn-outlining
  - Jayaram Bobba authored
  - Jayaram Bobba authored
  - Jayaram Bobba authored
  - Nick Korovaiko authored:
    - inline Inliner pass + tests
    - debugging
    - fix inliner failures due to a random function being picked as the outermost one
    - copyright headers
- 03 Feb, 2018 3 commits
  - Robert Kimball authored: fix clone of function with multiple outputs
  - Scott Cyphers authored
  - Robert Kimball authored: add a more robust replacement for is_functionally_identical that compares the emitted functions as strings (#441); see the sketch after this list
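The string-comparison idea in the #441 commit above can be shown with a minimal sketch. This is not the actual ngraph implementation; the `Function` stand-in and `emit_to_string` helper below are hypothetical placeholders for the real graph type and code emitter.

```cpp
#include <string>

// Hypothetical stand-ins: the real Function type and code emitter live in the
// ngraph codebase and are not reproduced here.
struct Function
{
    std::string name;
};

std::string emit_to_string(const Function& f)
{
    // In the real backend this would run the code generator and return the
    // emitted C++ source for the function.
    return "void " + f.name + "() {}";
}

// Two functions are treated as identical if emitting them yields the same
// source text, which is more robust than comparing graph structure field by
// field.
bool is_functionally_identical(const Function& a, const Function& b)
{
    return emit_to_string(a) == emit_to_string(b);
}
```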
- 02 Feb, 2018 3 commits
  - Yixing Lao authored
  - Adam Procter authored
  - adstraw authored:
    - mark convolution and pooling as non-functionally equivalent as a temporary workaround for an MKLDNN issue
    - fix build warning
- 01 Feb, 2018 4 commits
  - Scott Cyphers authored:
    - More graph construction
    - Review comments
  - Robert Kimball authored:
    - fix resource generator dependencies
    - add cr to end of file
  - Scott Cyphers authored
  - Nick Korovaiko authored:
    - simplification pass
    - serializer change to test models
    - some small test fixes
    - addressing Scott's feedback
    - missed one nn
    - formatting fixes
    - simplification -> reshape_elimination
- 31 Jan, 2018 2 commits
  - L.S. Cook authored:
    - WIP on finding a good format for op docs in RST
    - A few more scribbles
    - fix up branch for Amazon code share
    - add conf.py configuration details from aproctor's branch for doxy-breathe integration
    - update section on how to build the documentation with breathe install details
    - Remove empty file on training, update framework integration notes
    - Add CentOS stub, fix spelling, core op definition, add to glossary.
    - more documentation cleanup on README and installation and testing
    - more cleanup of docs for TensorFlow
    - Simplify Dot Autodiff (#412)
    - Simplify Dot Autodiff
    - remove commented code
    - Remove TupleType, ValueType (#411)
    - Remove TupleType, ValueType
    - Fix compile error.
    - Change convolution reference to work with f32 (#409)
    - Drwebb/gpu backend dot op (#413)
    - Drwebb/gpu backend dot op (#387)
    - GPU Dot prod emitter switch statement
    - cuBLAS dot kernel call
    - Flush out arg substitution into gpu dot kernel call
    - Drwebb/gpu backend dot op (#392)
    - Take in CodeWriter into gpu op emitters
    - Introduce GPU function gen based on pass functions
    - Additional gpu emitter stubs
    - link cublas in to unit test and ngraph
    - Use static code gen methods for GPU, add new GPU op stubs
    - use pass manager to declare functions / cublas Updates
    - Prune down gpu_external_function wip
    - Switch back to GPU tensor views in GPU backend
    - Pass in cublas handle to GPU external function
    - cuMalloc memory in gpu tensor view
    - Use cuda runtime malloc and free for tensor view management
    - change GPU tensor view init, and use GPU tensor view for GPU call frame
    - include headers as system dirs
    - GPU tensor printing utility function
    - cublasSetPointer to device mode / Fix copyright notification lowercasing
    - Passing GPU dot product test using cuBLAS; clean up
    - Changes from review
    - Add an overview.
    - Intro for building graphs.
    - Refactor docs so that Doxygen and Sphinx are integrated (Sphinx depends on Doxygen for the docstrings). Still need to resolve a lingering assumption that the build dir is contained in private-ngraph-cpp; it's proving to be surprisingly tricky.
    - Added the TensorFlow XLA build information and an example of how to run MNIST MLP with TF/nGraph
    - Updated TF integration guide for clarity. Added files from cyphers-amazon branch. Add minor changes to sphinx-doxy to test apis
    - Small revision of overview and add graphic from arXiv paper
    - WIP more editing, picking up from where I left off last week
    - Fix garbled sentence edit
    - WIP Edit for readability and such
    - Better font rendering on all architectures included with our custom theme
    - Cleanup current version of documentation. Add NeoSans font binaries to make local font rendering of h1, h2, etc.
    - Missed merge conflict
    - Add something on functions, don't forward-reference parameters
    - What we have so far into a PR for review
    - Need file for cmake
    - Missing header
    - Remove duplicate file
    - added breathe package to contrib/docker/Dockerfile.ngraph_cpp
  - Nick Korovaiko authored:
    - bprop for avg pool; remove debug statements + formatting
    - fix CPU test failures
    - numeric tests
    - use make_shared; unprotect c-tor
- 30 Jan, 2018 3 commits
  - Robert Kimball authored
  - Nick Korovaiko authored (see the cblas_sgemm sketch after this list):
    - cblas_gemm working on mlp
    - rebase & small fixes
    - enable debug output
    - support replacing function's outputs
    - productizing CPUFusion
    - addressing Bob and Jayaram's feedback
    - removing json used for simplification tests
    - adding comments
    - fixing formatting errors and removing dead code
    - TODO msg
    - removing serializer changes
  - Adam Procter authored
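As a rough illustration of the cblas_gemm emission mentioned in the CPUFusion commit above, a dense MLP layer Y = X * W can be lowered to a single cblas_sgemm call. The function name, shapes, and layout below are assumptions for illustration, not the code ngraph actually generates.

```cpp
#include <cblas.h>

// Sketch: compute Y = X * W for one MLP layer, where X is (batch x in_dim),
// W is (in_dim x out_dim), and Y is (batch x out_dim), all row-major float32.
void dense_layer(const float* X, const float* W, float* Y,
                 int batch, int in_dim, int out_dim)
{
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                batch, out_dim, in_dim,
                1.0f,        // alpha
                X, in_dim,   // A and its leading dimension
                W, out_dim,  // B and its leading dimension
                0.0f,        // beta: overwrite Y
                Y, out_dim); // C and its leading dimension
}
```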
- 29 Jan, 2018 6 commits
  - Jayaram Bobba authored: Jmenon/maxpooling
  - Jaikrishnan Menon authored
  - Jaikrishnan Menon authored
  - Jaikrishnan Menon authored
  - Jaikrishnan Menon authored
  - Adam Procter authored
- 28 Jan, 2018 1 commit
  - Robert Kimball authored
- 27 Jan, 2018 2 commits
  - Jaikrishnan Menon authored
  - Jaikrishnan Menon authored
- 26 Jan, 2018 2 commits
  - Jaikrishnan Menon authored
  - Jayaram Bobba authored: Update to newer version of MKLDNN with verbose debug output. Pull fro…
- 25 Jan, 2018 1 commit
  - Jayaram Bobba authored: Update to newer version of MKLDNN with verbose debug output. Pull from github.com/intel/mkl-dnn instead of github.com/01org/mkl-dnn
- 24 Jan, 2018 5 commits
  - Jayaram Bobba authored: Support for MKLDNN convolution fprop in CPU backend based on Jai's implementation (no layout yet)
  - Tristan Webb authored (see the cuBLAS sketch after this list):
    - Drwebb/gpu backend dot op (#387)
    - GPU Dot prod emitter switch statement
    - cuBLAS dot kernel call
    - Flush out arg substitution into gpu dot kernel call
    - Drwebb/gpu backend dot op (#392)
    - Take in CodeWriter into gpu op emitters
    - Introduce GPU function gen based on pass functions
    - Additional gpu emitter stubs
    - link cublas in to unit test and ngraph
    - Use static code gen methods for GPU, add new GPU op stubs
    - use pass manager to declare functions / cublas Updates
    - Prune down gpu_external_function wip
    - Switch back to GPU tensor views in GPU backend
    - Pass in cublas handle to GPU external function
    - cuMalloc memory in gpu tensor view
    - Use cuda runtime malloc and free for tensor view management
    - change GPU tensor view init, and use GPU tensor view for GPU call frame
    - include headers as system dirs
    - GPU tensor printing utility function
    - cublasSetPointer to device mode / Fix copyright notification lowercasing
    - Passing GPU dot product test using cuBLAS; clean up
    - Changes from review
  - Adam Procter authored
  - Scott Cyphers authored:
    - Remove TupleType, ValueType
    - Fix compile error.
  - Matthew Brookhart authored:
    - Simplify Dot Autodiff
    - remove commented code
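The cuBLAS pieces of the GPU dot-op commit above (the dot kernel call and the device pointer mode) can be sketched as follows. This is a minimal illustration under assumed names, not the ngraph GPU emitter's actual output.

```cpp
#include <cublas_v2.h>

// Sketch: result = dot(x, y) for two device vectors of length n, with the
// scalar result left on the device (pointer mode set to device, as the
// commit's "cublasSetPointer to device mode" item describes).
void gpu_dot(cublasHandle_t handle,
             const float* d_x, const float* d_y, int n, float* d_result)
{
    cublasSetPointerMode(handle, CUBLAS_POINTER_MODE_DEVICE);
    cublasSdot(handle, n, d_x, 1, d_y, 1, d_result);
}
```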
- 23 Jan, 2018 1 commit
  - adstraw authored:
    - fix convolution reference script
    - convolution backprop
    - cleanup
    - fix build warnings
    - Missing include
    - fix build warning part 2
    - move numeric_compare to its own header; code review feedback
    - fix build warnings 3
    - fix build warnings 4
    - clang-format
    - cast to avoid implicit cast warning
- 22 Jan, 2018 1 commit
  - L.S. Cook authored:
    - adding work and organization for docs
    - Summary of changes:
      - removed rst file from css folder
      - added substitutions to conf file so folks don't have to type out the whole name
      - added branding notice so link in footer works
      - added glossary so we can have a place for terms relative to libngraph
      - added placeholder for MXNet and TF frontend under new section
      - cleaned up api page
    - Fix code fencing error and indentation issue causing build to throw error
    - adding MXNet integration guide details and other miscellaneous structure
    - fix unicode error in rst epilog
    - testing with unicode marker for rst_epilog string
    - replace api scaffolding with TODO to avoid breaking Jenkins
- 21 Jan, 2018 1 commit
  - Robert Kimball authored