Fixing an openforwrite block on Hadoop 2.6

An Azkaban job got stuck during log parsing. When I got to the office and looked into the error, fsck showed that one block of the file was stuck in OPENFORWRITE state and could not be read. A colleague mentioned the node had been restarted the day before, so my guess is that Flume was writing the log to HDFS at the time and never closed the block properly. The file itself is not corrupted; the lease simply was never released. Hadoop 2.7 ships a recovery command for this (hdfs debug recoverLease), but we are on 2.6, so I found the snippet below online, ran it successfully, and am recording it here for reference.
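
For context, fsck can list files that are still open for write, and on Hadoop 2.7+ the lease can be released straight from the command line (the second command does not exist on 2.6; paths below are placeholders):

hdfs fsck /path/to/logs -openforwrite
hdfs debug recoverLease -path /path/to/file -retries 3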


Code:

mkdir -p com/hdfsclient

touch com/hdfsclient/Recover.java

package com.hdfsclient;

import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class Recover {
    public static void main(String[] args) {
        if (args.length != 1) {
            System.err.println("Usage: Recover <hdfs-path>");
            System.exit(1);
        }
        try {
            recoverLease(args[0]);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Ask the NameNode to recover the lease on the given file and close it.
    // recoverLease() returns true if the file was closed immediately; false
    // means recovery has been triggered and may take a moment to complete.
    public static void recoverLease(String path) throws IOException {
        DistributedFileSystem fs = new DistributedFileSystem();
        Configuration conf = new Configuration();
        fs.initialize(URI.create(path), conf);
        boolean closed = fs.recoverLease(new Path(path));
        System.out.println("recoverLease(" + path + ") returned " + closed);
        fs.close();
    }
}

javac -cp $(/data/service/jg/hadoop-2.6.0/bin/hadoop classpath) com/hdfsclient/Recover.java
jar -cvf xx.jar com
hadoop jar xx.jar com.hdfsclient.Recover hdfs://doumi-ana-online/user/rd/.Trash/Current/data/service_data/hadoop/weblogs/all.push.msg.queue/19-06-05/pushAll.1559718551401.log
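
To confirm the lease was actually released before re-running the Azkaban job, a quick sanity check (the path is a placeholder for the file above):

hdfs fsck /path/to/file -files -blocks -openforwrite
hdfs dfs -tail /path/to/file

If fsck no longer flags the block as OPENFORWRITE and the tail read succeeds, the recovery worked.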