.. training/data_ingest.rst:

Data Ingestion
##############


Using TensorFlow
----------------

.. include:: tf_dist.rst


Using PaddlePaddle
------------------

.. include:: paddle_dist.rst


Using a custom framework
------------------------

.. include:: ../core/constructing-graphs/distribute-train.rst

The essential operation for synchronizing gradients across all workers in
data-parallel training is ``allreduce``, favored for its simplicity and
scalability over parameter servers. The AllReduce op is one of the nGraph
Library's core ops. To enable gradient synchronization for a network, we
simply inject the AllReduce op into the computation graph, connecting the
subgraph produced by the autodiff computation to the optimizer update (which
then becomes part of the nGraph graph). The nGraph backend handles the rest.
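
For illustration, here is a minimal sketch of that injection, assuming the
nGraph C++ API with ``ngraph::op::AllReduce`` wrapping a gradient node so the
backend reduces it across workers before the optimizer update consumes it.
The helper ``synchronize_gradient`` and the nodes ``grad``, ``weights``, and
``lr`` are hypothetical placeholders, not names from the library; see the
distributed-training document included above for the authoritative example.

.. code-block:: cpp

   #include <memory>
   #include <ngraph/ngraph.hpp>

   using namespace ngraph;

   // Hypothetical helper: wrap a parameter gradient in an AllReduce op so
   // that the backend reduces it element-wise across all workers.
   std::shared_ptr<Node> synchronize_gradient(std::shared_ptr<Node> grad)
   {
       return std::make_shared<op::AllReduce>(grad);
   }

   // During graph construction (sketch): `grad` comes from autodiff, and
   // the optimizer update consumes it only after synchronization.
   //
   //   auto synced_grad = synchronize_gradient(grad);
   //   auto new_weights = std::make_shared<op::Subtract>(
   //       weights, std::make_shared<op::Multiply>(synced_grad, lr));

In this sketch, the only change relative to single-process training is one
extra op per gradient; as noted above, the nGraph backend supplies the
underlying communication.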