
The append Method of Hadoop's FileSystem

  While performing a FileSystem append operation on Hadoop 1.1.2 today, I hit the following exception:

org.apache.hadoop.ipc.RemoteException: java.io.IOException: Append is not supported. Please see the dfs.support.append configuration parameter.

A quick Google search shows that Hadoop 1.x does not support the FileSystem append operation. The official Hadoop 1.1.2 Release Notes say:

    • HADOOP-8230. Major improvement reported by eli2 and fixed by eli 
      Enable sync by default and disable append
      Append is not supported in Hadoop 1.x. Please upgrade to 2.x if you need append. If you enabled dfs.support.append for HBase, you're OK, as durable sync (why HBase required dfs.support.append) is now enabled by default. If you really need the previous functionality, to turn on the append functionality set the flag "dfs.support.broken.append" to true.

  The note above clearly states that if you need the append operation, you should upgrade to Hadoop 2.x. You also need to add the following configuration to hdfs-site.xml:

<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>

 The Hadoop API also provides a setter for this option to enable content appending, as in the following code:

Configuration conf = new Configuration();
conf.setBoolean("dfs.support.append", true);
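Putting the configuration and the API call together, an append might look roughly like the sketch below. This is an illustration only: the target path is hypothetical, it assumes a running HDFS cluster reachable through the default configuration, and on a 1.x cluster the call will still fail unless the "dfs.support.broken.append" flag from HADOOP-8230 is enabled.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Enable append support in the client configuration.
        conf.setBoolean("dfs.support.append", true);

        FileSystem fs = FileSystem.get(conf);
        // Hypothetical target path on HDFS; the file must already exist.
        Path file = new Path("/tmp/append-demo.txt");

        // Open the existing file for append and write at its end.
        FSDataOutputStream out = fs.append(file);
        try {
            out.write("appended line\n".getBytes("UTF-8"));
        } finally {
            out.close();
        }
        fs.close();
    }
}
```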

  That said, appending to an existing file is an operation that deserves careful consideration, for the following reason:

[Raghava's mailing-list reply:] In short, appends in HDFS are extremely experimental and dangerous. Most would advise you to leave this disabled. Your best option for "append" like behavior is to rewrite the file with new content being added at the end. Append support was briefly introduced and then removed as a number of issues came up. I believe the open (parent) JIRA issue tracking this is: http://issues.apache.org/jira/browse/HDFS-265
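The "rewrite the file" alternative Raghava recommends can be sketched with plain Java I/O. This is a local-filesystem illustration of the idea (read everything, write old content plus the new tail to a temporary file, then replace the original); on HDFS you would do the same with FileSystem streams and a rename instead of java.nio calls.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class RewriteInsteadOfAppend {
    // "Append" by rewriting: copy old bytes plus the new tail into a
    // temporary file, then replace the original in one move.
    static void rewriteWithSuffix(Path file, byte[] extra) throws IOException {
        Path tmp = Paths.get(file.toString() + ".tmp");
        byte[] old = Files.readAllBytes(file);
        byte[] merged = new byte[old.length + extra.length];
        System.arraycopy(old, 0, merged, 0, old.length);
        System.arraycopy(extra, 0, merged, old.length, extra.length);
        Files.write(tmp, merged);
        Files.move(tmp, file, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("demo", ".txt");
        Files.write(f, "hello\n".getBytes(StandardCharsets.UTF_8));
        rewriteWithSuffix(f, "world\n".getBytes(StandardCharsets.UTF_8));
        System.out.print(new String(Files.readAllBytes(f), StandardCharsets.UTF_8));
        // prints:
        // hello
        // world
    }
}
```

The trade-off is obvious: every "append" costs a full copy of the file, but it avoids the experimental append code path entirely.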

  If you want to dig into the underlying reasons, see: http://issues.apache.org/jira/browse/HDFS-265