Distributed TensorFlow: CreateSession still only waiting for response from a different node

Problem description:

I am trying to get the mnist_replica.py example working. Following the advice in this question, I am specifying device filters.

My code works when the ps and worker tasks are on the same node. When I place the ps task on node 1 and the worker task on node 2, I get "CreateSession still waiting".
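For reference, device filters are passed in through the session's ConfigProto. A minimal sketch of what mnist_replica.py builds, assuming the TF 1.x API and worker task index 0 (this mirrors the `device_filters` line visible in the session log below):

```python
import tensorflow as tf  # TF 1.x API, as used by mnist_replica.py

# Restrict this worker to see only the ps tasks and itself, so that
# session creation does not block waiting on unrelated worker tasks.
sess_config = tf.ConfigProto(
    allow_soft_placement=True,
    device_filters=["/job:ps", "/job:worker/task:0"])  # task index 0 assumed

# The config is then handed to the session that talks to the in-process
# server, e.g. via a Supervisor:
# sess = sv.prepare_or_wait_for_session(server.target, config=sess_config)
```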

For example:

Pseudo-distributed version (works!)

Terminal dump from node 1 (terminal 1)

node1 $ python mnist_replica.py --worker_hosts=node1:2223 --job_name=ps --task_index=0 
Extracting /tmp/mnist-data/train-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/train-labels-idx1-ubyte.gz 
Extracting /tmp/mnist-data/t10k-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/t10k-labels-idx1-ubyte.gz 
job name = ps 
task index = 0 
2017-10-10 11:09:16.637006: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> localhost:2222} 
2017-10-10 11:09:16.637075: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> node1:2223} 
2017-10-10 11:09:16.640114: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:316] Started server with target: grpc://localhost:2222 
... 

Terminal dump from node 1 (terminal 2)

node1 $ python mnist_replica.py --worker_hosts=node1:2223 --job_name=worker --task_index=0 
Extracting /tmp/mnist-data/train-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/train-labels-idx1-ubyte.gz 
Extracting /tmp/mnist-data/t10k-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/t10k-labels-idx1-ubyte.gz 
job name = worker 
task index = 0 
2017-10-10 11:11:12.784982: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> localhost:2222} 
2017-10-10 11:11:12.785046: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> localhost:2223} 
2017-10-10 11:11:12.787685: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:316] Started server with target: grpc://localhost:2223 
Worker 0: Initializing session... 
2017-10-10 11:11:12.991784: I tensorflow/core/distributed_runtime/master_session.cc:998] Start master session 418af3aa5ce103a3 with config: device_filters: "/job:ps" device_filters: "/job:worker/task:0" allow_soft_placement: true 
Worker 0: Session initialization complete. 
Training begins @ 1507648273.272837 
1507648273.443305: Worker 0: training step 1 done (global step: 0) 
1507648273.454537: Worker 0: training step 2 done (global step: 1) 
... 

Distributed across 2 nodes (does not work)

Terminal dump from node 2 (worker)

node2 $ python mnist_replica.py --ps_hosts=node1:2222 --worker_hosts=node2:2222 --job_name=worker --task_index=0 
Extracting /tmp/mnist-data/train-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/train-labels-idx1-ubyte.gz 
Extracting /tmp/mnist-data/t10k-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/t10k-labels-idx1-ubyte.gz 
job name = worker 
task index = 0 
2017-10-10 10:51:13.303021: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> node1:2222} 
2017-10-10 10:51:13.303081: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> localhost:2222} 
2017-10-10 10:51:13.308288: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:316] Started server with target: grpc://localhost:2222 
Worker 0: Initializing session... 
2017-10-10 10:51:23.508040: I tensorflow/core/distributed_runtime/master.cc:209] CreateSession still waiting for response from worker: /job:ps/replica:0/task:0 
2017-10-10 10:51:33.508247: I tensorflow/core/distributed_runtime/master.cc:209] CreateSession still waiting for response from worker: /job:ps/replica:0/task:0 
... 

Both nodes run CentOS 7, TensorFlow r1.3, Python 2.7.

Terminal dump from node 1 (ps)

node1 $ python mnist_replica.py --worker_hosts=node2:2222 --job_name=ps --task_index=0 
Extracting /tmp/mnist-data/train-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/train-labels-idx1-ubyte.gz 
Extracting /tmp/mnist-data/t10k-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/t10k-labels-idx1-ubyte.gz 
job name = ps 
task index = 0 
2017-10-10 10:54:27.419949: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> localhost:2222} 
2017-10-10 10:54:27.420064: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> node2:2222} 
2017-10-10 10:54:27.426168: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:316] Started server with target: grpc://localhost:2222 
... 

The nodes can talk to each other over ssh, the hostnames are correct, and the firewall is disabled. Am I missing something?

Are there any additional steps I need to take to make sure the nodes can talk to each other over gRPC? Thanks.

I think you had better check the ClusterSpec and Server parts. For example, you should check the IP addresses of node1 and node2, the ports, the task indices, and so on. I would like to give concrete advice, but it is hard to do so without the code. Thanks.
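For what it's worth, the ClusterSpec/Server wiring the comment refers to looks roughly like this (TF 1.x API; the host names and ports are placeholders matching the dumps above):

```python
import tensorflow as tf  # TF 1.x API

# Every task must be started with the SAME cluster definition; a mismatch
# in hosts, ports, or task indices between processes is a common cause of
# "CreateSession still waiting" hangs.
cluster = tf.train.ClusterSpec({
    "ps": ["node1:2222"],      # placeholder host:port
    "worker": ["node2:2222"],  # placeholder host:port
})

# job_name/task_index select which entry of the cluster this process is.
server = tf.train.Server(cluster, job_name="worker", task_index=0)
```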

+0

Thank you very much for the reply. I have just edited my original question with the terminal dumps and system information. I would appreciate any help. – Sid

+0

Hmm.. I think if the code you implemented is similar to the one at the url you gave, it is hard for the bug to be in the code. How about checking the network connectivity between node1 and node2? For example, check the ports, or ping node1... – jwl1993
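A quick way to do the connectivity check the comment suggests is a plain TCP connect to the ps port. This is a generic standard-library sketch (the host name and port are placeholders; run it on node2 against node1):

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # e.g. run on node2 to verify the ps task listening on node1 is reachable
    print(port_reachable("node1", 2222))
```

If this prints False while the ps process is running on node1, the problem is in the network path (firewall, wrong port, wrong interface), not in TensorFlow.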

+1

I tested mnist_replica.py with the commands above on a server with two nodes, and it works fine. I think it is not a bug in the code; you had better check other things. – jwl1993

The problem was that the firewall was blocking the port. I disabled the firewall on all of the nodes in question and the problem resolved itself!
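Rather than disabling the firewall outright, on CentOS 7 the gRPC ports can be opened in firewalld instead; a sketch, using the port numbers from the dumps above (run on each node, requires root):

```shell
# Open the TensorFlow gRPC ports permanently, then reload the rules.
firewall-cmd --permanent --add-port=2222/tcp
firewall-cmd --permanent --add-port=2223/tcp
firewall-cmd --reload
```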