Failed to connect to server: localhost/127.0.0.1:9000: try once and fail. java.net.ConnectException: Connection refused

Problem description:

I am trying to put a file into my local HDFS by running hadoop fs -put part-00000 /hbase/, and it gives me this: Failed to connect to server: localhost/127.0.0.1:9000: try once and fail. java.net.ConnectException: Connection refused

17/05/30 16:11:52 WARN ipc.Client: Failed to connect to server: localhost/127.0.0.1:9000: try once and fail. 
java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) 
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:681) 
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:777) 
    at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:409) 
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1542) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1373) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1337) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) 
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:787) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335) 
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1700) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1436) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1433) 
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1433) 
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:64) 
    at org.apache.hadoop.fs.Globber.doGlob(Globber.java:269) 
    at org.apache.hadoop.fs.Globber.glob(Globber.java:148) 
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1685) 
    at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326) 
    at org.apache.hadoop.fs.shell.CommandWithDestination.getRemoteDestination(CommandWithDestination.java:195) 
    at org.apache.hadoop.fs.shell.CopyCommands$Put.processOptions(CopyCommands.java:256) 
    at org.apache.hadoop.fs.shell.Command.run(Command.java:164) 
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:315) 
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) 
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) 
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:378) 
put: Call From steves-macbook-pro.local/172.29.16.117 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 
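
A quick way to confirm the symptom is to check whether anything is listening on port 9000 at all (a minimal check using standard macOS tools; the port comes from core-site.xml below):

# If the NameNode were up, this would show a listener bound to port 9000;
# "Connection refused" means nothing is listening there.
lsof -nP -iTCP:9000 -sTCP:LISTEN
# Alternatively, probe the port directly:
nc -z localhost 9000 && echo "port 9000 open" || echo "port 9000 closed"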

Before this, I had run $ hadoop fs -mkdir /hbase, which succeeded.

I checked my DataNode log, and here is what it shows:

2017-05-30 16:21:48,137 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: localhost/127.0.0.1:9000 
2017-05-30 16:21:54,147 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2017-05-30 16:21:55,150 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2017-05-30 16:21:56,154 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2017-05-30 16:21:57,158 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2017-05-30 16:21:58,162 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2017-05-30 16:21:59,165 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2017-05-30 16:22:00,168 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2017-05-30 16:22:01,174 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2017-05-30 16:22:02,179 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2017-05-30 16:22:03,183 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2017-05-30 16:22:03,183 WARN org.apache.hadoop.ipc.Client: Failed to connect to server: localhost/127.0.0.1:9000: retries get failed due to exceeded maximum allowed retries number: 10 
java.net.ConnectException: Connection refused 
     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) 
     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) 
     at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:681) 
     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:777) 
     at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:409) 
     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1542) 
     at org.apache.hadoop.ipc.Client.call(Client.java:1373) 
     at org.apache.hadoop.ipc.Client.call(Client.java:1337) 
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) 
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) 
     at com.sun.proxy.$Proxy15.versionRequest(Unknown Source) 
     at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.versionRequest(DatanodeProtocolClientSideTranslatorPB.java:274) 
     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.retrieveNamespaceInfo(BPServiceActor.java:215) 
     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:261) 
     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:746) 
     at java.lang.Thread.run(Thread.java:745) 
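
The DataNode log only shows the client side of the failure: the DataNode itself cannot reach the NameNode at localhost:9000 either. The more telling log is the NameNode's own. The path below is an assumption based on this Homebrew layout; the exact filename embeds the user and hostname, hence the glob:

# Look for the NameNode's startup error (path assumed for a Homebrew install).
tail -n 50 /usr/local/Cellar/hadoop/2.8.0/libexec/logs/hadoop-*-namenode-*.log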

I found a couple of very similar questions on Stack Overflow; in short, here is what I have already tried:

/usr/local/Cellar/hadoop/2.8.0/sbin/stop-all.sh 

/usr/local/Cellar/hadoop/2.8.0/bin/hdfs namenode -format 

/usr/local/Cellar/hadoop/2.8.0/sbin/start-all.sh 

/usr/local/Cellar/hadoop/2.8.0/sbin/start-dfs.sh 
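
One caveat with re-running hdfs namenode -format (a general Hadoop behavior worth checking here, not something from the logs above): formatting assigns the NameNode a new clusterID, and a DataNode whose data directory still carries the old clusterID will refuse to register. The paths below assume the default layout under the hadoop.tmp.dir configured in core-site.xml:

# The two clusterID values must match; if they differ after a re-format,
# clearing the DataNode directory (local test data only!) resolves it.
grep clusterID /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/VERSION
grep clusterID /usr/local/Cellar/hadoop/hdfs/tmp/dfs/data/current/VERSION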

Then I ran $ jps, and this is what I got:

13568 Main 
23154 NodeManager 
13477 HMaster 
21927 DataNode 
12696 Launcher 
13674 GradleDaemon 
22042 SecondaryNameNode 
23052 ResourceManager 
23502 Jps 
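
Note what is missing from that list: there is a SecondaryNameNode and a DataNode, but no NameNode process, which matches the connection-refused errors. A one-liner to make that explicit:

# -w matches NameNode as a whole word, so SecondaryNameNode does not count.
jps | grep -w NameNode || echo "NameNode is not running"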

Also, I checked my /usr/local/Cellar/hadoop/2.8.0/libexec/etc/hadoop/core-site.xml, and it is pointing to localhost:9000:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/Cellar/hadoop/hdfs/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
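
As an aside, fs.default.name is the deprecated alias of fs.defaultFS (both still work on 2.8.0). To double-check which filesystem URI the client actually resolves:

# Prints the effective filesystem URI; it should echo hdfs://localhost:9000.
hdfs getconf -confKey fs.defaultFS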

So somehow my Hadoop services are not coming up? Any pointers on where I should look next?

Thanks a lot! I really appreciate it!

EDIT:

I found something else that is really interesting/weird (I have no idea why this happens, or whether it is related):

  1. When I do not have a DataNode running, I can access the web UI at http://localhost:50070/ to see how my local Hadoop is doing.

  2. When I do

/usr/local/Cellar/hadoop/2.8.0/bin/hdfs namenode -format

/usr/local/Cellar/hadoop/2.8.0/sbin/stop-all.sh

/usr/local/Cellar/hadoop/2.8.0/sbin/start-all.sh

and then run jps, I get a running DataNode, but I can no longer access the web UI at http://localhost:50070/.
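
To see which process, if any, is holding the web UI port in each of those two states (50070 is the Hadoop 2.x default; it moved to 9870 in Hadoop 3):

# Shows the process bound to the NameNode web UI port, if any.
lsof -nP -iTCP:50070 -sTCP:LISTEN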


@EJP, thanks a lot! Sorry for the silly question, but how do I start it? I tried '/usr/local/Cellar/hadoop/2.8.0/sbin/start-all.sh' and '/usr/local/Cellar/hadoop/2.8.0/sbin/start-dfs.sh'. Thanks. – FisherCoder


Can you check whether there is anything inside '/usr/local/Cellar/hadoop/hdfs/tmp'? If so, delete everything in there, format the namenode, and start everything again. Then let me know what happens –
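
Spelled out, that suggestion amounts to the following (destructive: it wipes all local HDFS state, so it is only appropriate for a throwaway single-node setup like this one):

/usr/local/Cellar/hadoop/2.8.0/sbin/stop-all.sh
rm -rf /usr/local/Cellar/hadoop/hdfs/tmp/*      # erases all HDFS data!
/usr/local/Cellar/hadoop/2.8.0/bin/hdfs namenode -format
/usr/local/Cellar/hadoop/2.8.0/sbin/start-all.sh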

It turns out I was missing some configuration in my hdfs-site.xml.

I added the following to it:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/Users/USERNAME/data/hadoop/hdfs/nn</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>file:/Users/USERNAME/data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>fs.checkpoint.edits.dir</name>
    <value>file:/Users/USERNAME/data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/Users/USERNAME/data/hadoop/hdfs/dn</value>
  </property>
</configuration>
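
The directories referenced above need to be usable before formatting; creating them up front avoids permission surprises (USERNAME is a placeholder, exactly as in the config):

# Create the NameNode, checkpoint, and DataNode directories from hdfs-site.xml.
mkdir -p /Users/USERNAME/data/hadoop/hdfs/{nn,snn,dn}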

Then I ran:

hadoop namenode -format -force
stop-all.sh
start-all.sh

And it works fine now.

When you run Hadoop for the first time, you must run hdfs namenode -format; otherwise the NameNode does not work!
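
Putting it all together, a minimal first-run sequence for a local single-node setup like this one (a sketch; paths match the Homebrew install above):

# One-time format, then start HDFS and verify before retrying the upload.
/usr/local/Cellar/hadoop/2.8.0/bin/hdfs namenode -format
/usr/local/Cellar/hadoop/2.8.0/sbin/start-dfs.sh
jps                        # expect NameNode, DataNode, SecondaryNameNode
hadoop fs -mkdir -p /hbase
hadoop fs -put part-00000 /hbase/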