A typical approach to meeting the computational requirements of large-scale neural networks is to use a heterogeneous distributed environment with a mixture of many CPUs and GPUs. In a new paper submitted to arXiv on 13 June 2017, a team from Google uses reinforcement learning to optimize device placement for TensorFlow computational graphs, finding non-trivial placements for Inception-V3 and recurrent LSTM models.
Read the paper here: https://arxiv.org/abs/1706.04972
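To give a feel for the idea, here is a minimal conceptual sketch (not the paper's method, which trains a sequence-to-sequence policy over the whole graph): a simple REINFORCE loop that learns a per-operation device assignment so as to minimize runtime. The graph size, device count, per-op costs, and the `measure_runtime` function are all made-up stand-ins for actually placing and timing a real TensorFlow graph.

```python
import numpy as np

# Toy REINFORCE sketch of learned device placement.
# Assumptions (not from the paper): 6 ops, 2 devices, and a simulated
# runtime equal to the heaviest per-device load.

rng = np.random.default_rng(0)
NUM_OPS = 6        # operations in the toy computational graph
NUM_DEVICES = 2    # e.g. one CPU and one GPU

# Hypothetical per-op cost on each device (unknown to the policy).
op_cost = rng.uniform(1.0, 5.0, size=(NUM_OPS, NUM_DEVICES))

def measure_runtime(placement):
    # Stand-in for executing the placed graph and timing it:
    # runtime is the largest total load on any single device.
    loads = np.zeros(NUM_DEVICES)
    for op, dev in enumerate(placement):
        loads[dev] += op_cost[op, dev]
    return loads.max()

# Independent softmax logits per op (the paper uses an RNN policy instead).
logits = np.zeros((NUM_OPS, NUM_DEVICES))
baseline = None
lr = 0.1

for step in range(500):
    # Sample a placement from the current policy.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    placement = [rng.choice(NUM_DEVICES, p=probs[op]) for op in range(NUM_OPS)]

    # Reward is negative runtime; a moving-average baseline reduces variance.
    reward = -measure_runtime(placement)
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    advantage = reward - baseline

    # REINFORCE update: push logits toward actions with above-average reward.
    for op, dev in enumerate(placement):
        grad = -probs[op]
        grad[dev] += 1.0
        logits[op] += lr * advantage * grad

best = logits.argmax(axis=1)
print("learned placement:", best.tolist())
print("simulated runtime:", measure_runtime(best))
```

The real system replaces the stubbed cost model with actual execution time of the placed TensorFlow graph, which is what lets it discover placements that beat hand-crafted heuristics.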