The open-source version of TensorFlow supports asynchronous gradient descent without even requiring any modification to the graph. The easiest way is to execute multiple concurrent training steps in parallel:
import threading
import tensorflow as tf

NUM_CONCURRENT_STEPS = 4  # Number of concurrent training threads (tune as needed).

loss = ...

# Any of the optimizer classes can be used here.
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

sess = tf.Session()
sess.run(tf.initialize_all_variables())

def train_function():
  # TODO: Better termination condition, e.g. using a `max_steps` counter.
  while True:
    sess.run(train_op)

# Create multiple threads to run `train_function()` in parallel.
train_threads = []
for _ in range(NUM_CONCURRENT_STEPS):
  train_threads.append(threading.Thread(target=train_function))

# Start the threads, and block on their completion.
for t in train_threads:
  t.start()
for t in train_threads:
  t.join()
This example sets up NUM_CONCURRENT_STEPS concurrent calls to sess.run(train_op). Since there is no coordination between these threads, they proceed asynchronously.
Achieving synchronous parallel training is (currently) more challenging, because it requires additional coordination to ensure that all replicas read the same version of the parameters, and that all of their updates become visible at the same time. The multi-GPU example for CIFAR-10 training performs synchronous updates by making multiple copies of the "tower" in the training graph with shared parameters, and explicitly averaging the gradients across the towers before applying the update.
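As a rough, minimal sketch of that synchronous scheme (not the exact code from the CIFAR-10 example), the gradient averaging could look like the following; build_tower_loss() is a hypothetical helper standing in for your model-building code, and it is assumed to share its variables across towers:

opt = tf.train.GradientDescentOptimizer(0.01)

tower_grads = []
for i in range(NUM_GPUS):
  with tf.device("/gpu:%d" % i):
    # `build_tower_loss()` is a hypothetical helper that builds one model
    # replica ("tower") and returns its loss.
    loss = build_tower_loss()
    tower_grads.append(opt.compute_gradients(loss))

# Average each variable's gradient across all towers.
averaged_grads = []
for grads_and_vars in zip(*tower_grads):
  grads = [g for g, _ in grads_and_vars]
  var = grads_and_vars[0][1]
  averaged_grads.append((tf.add_n(grads) / len(grads), var))

# A single op applies the averaged gradients, so each step updates the shared
# parameters using gradients computed from the same parameter values.
train_op = opt.apply_gradients(averaged_grads)

With this arrangement, a single sess.run(train_op) performs one synchronous step that runs all towers.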
N.B. The code in this answer places all computation on the same device, which is not optimal if your machine has multiple GPUs. If you want to use all of your GPUs, follow the example of the multi-GPU CIFAR-10 model and create multiple "towers" with their operations pinned to each GPU. The code would look roughly as follows:
train_ops = []
for i in range(NUM_GPUS):
  with tf.device("/gpu:%d" % i):
    # Define a tower on GPU `i`.
    loss = ...
    train_ops.append(tf.train.GradientDescentOptimizer(0.01).minimize(loss))

def train_function(train_op):
  # TODO: Better termination condition, e.g. using a `max_steps` counter.
  while True:
    sess.run(train_op)

# Create multiple threads to run `train_function()` in parallel, one per tower.
train_threads = []
for train_op in train_ops:
  train_threads.append(threading.Thread(target=train_function, args=(train_op,)))

# Start the threads, and block on their completion.
for t in train_threads:
  t.start()
for t in train_threads:
  t.join()
Note that you might find it convenient to use variable scopes to facilitate variable sharing between the towers.
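For example, a minimal sketch of such sharing (again using the hypothetical build_tower_loss() helper) might mirror the loop above like this:

train_ops = []
with tf.variable_scope("model") as scope:
  for i in range(NUM_GPUS):
    with tf.device("/gpu:%d" % i):
      # Variables are created via tf.get_variable() inside the first tower.
      loss = build_tower_loss()
      train_ops.append(tf.train.GradientDescentOptimizer(0.01).minimize(loss))
      # Make the remaining towers reuse the variables created above.
      scope.reuse_variables()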