.. training/data_ingest.rst:

Data Ingestion
##############

Using TensorFlow
----------------

.. include:: tf_dist.rst

Using PaddlePaddle
------------------

.. include:: paddle_dist.rst

Using a custom framework
------------------------

.. include:: ../core/constructing-graphs/distribute-train.rst

The essential operation for data-parallel training is ``allreduce``, which
synchronizes gradients across all workers and is preferred over parameter
servers for its simplicity and scalability. The AllReduce op is one of the
nGraph Library's core ops. To enable gradient synchronization for a network,
we simply inject the AllReduce op into the computation graph, connecting the
graph for the autodiff computation and the optimizer update (which then
becomes part of the nGraph graph); the nGraph Backend handles the rest.
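
The sketch below illustrates the idea of injecting AllReduce between the
autodiff output and the optimizer update. It assumes the pre-v1 nGraph C++ op
API (``op::AllReduce``, ``op::Multiply``, ``op::Subtract``, ``op::Constant``);
exact headers and constructor signatures may differ between releases, so treat
this as a hedged example rather than a definitive recipe.

.. code-block:: cpp

   #include <memory>
   #include <vector>
   #include <ngraph/ngraph.hpp>

   using namespace ngraph;

   int main()
   {
       // Local weight and its locally computed gradient (as produced by autodiff).
       Shape shape{4, 8};
       auto weight = std::make_shared<op::Parameter>(element::f32, shape);
       auto grad   = std::make_shared<op::Parameter>(element::f32, shape);

       // Inject AllReduce: the local gradient is summed across all workers,
       // so every worker applies the same update.
       auto synced_grad = std::make_shared<op::AllReduce>(grad);

       // Plain SGD update on the synchronized gradient: weight - lr * synced_grad.
       // (The learning rate is materialized as a constant of matching shape.)
       auto lr = op::Constant::create(
           element::f32, shape, std::vector<float>(shape_size(shape), 0.01f));
       auto step    = std::make_shared<op::Multiply>(lr, synced_grad);
       auto updated = std::make_shared<op::Subtract>(weight, step);

       // The resulting Function is handed to a Backend, which handles the
       // actual cross-worker communication for the AllReduce op.
       auto f = std::make_shared<Function>(NodeVector{updated},
                                           ParameterVector{weight, grad});
       return 0;
   }

The key point is that gradient synchronization is expressed as an ordinary
graph node: no framework-level changes are needed beyond wrapping each
gradient with AllReduce before it reaches the optimizer update.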